00:00:00.001 Started by upstream project "autotest-per-patch" build number 126152 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.070 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.070 The recommended git tool is: git 00:00:00.071 using credential 00000000-0000-0000-0000-000000000002 00:00:00.072 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.111 Fetching changes from the remote Git repository 00:00:00.113 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.177 Using shallow fetch with depth 1 00:00:00.177 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.177 > git --version # timeout=10 00:00:00.227 > git --version # 'git version 2.39.2' 00:00:00.227 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.263 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.263 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.944 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.954 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.966 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:04.966 > git config core.sparsecheckout # timeout=10 00:00:04.974 > git read-tree -mu HEAD # timeout=10 00:00:04.992 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:05.009 Commit message: "inventory: add WCP3 to free inventory" 00:00:05.009 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:05.092 [Pipeline] Start of Pipeline 00:00:05.107 [Pipeline] library 00:00:05.108 Loading library shm_lib@master 00:00:05.108 Library shm_lib@master is cached. Copying from home. 00:00:05.124 [Pipeline] node 00:00:05.134 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:05.135 [Pipeline] { 00:00:05.144 [Pipeline] catchError 00:00:05.145 [Pipeline] { 00:00:05.156 [Pipeline] wrap 00:00:05.162 [Pipeline] { 00:00:05.168 [Pipeline] stage 00:00:05.169 [Pipeline] { (Prologue) 00:00:05.184 [Pipeline] echo 00:00:05.185 Node: VM-host-SM9 00:00:05.188 [Pipeline] cleanWs 00:00:05.196 [WS-CLEANUP] Deleting project workspace... 00:00:05.196 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.201 [WS-CLEANUP] done 00:00:05.356 [Pipeline] setCustomBuildProperty 00:00:05.443 [Pipeline] httpRequest 00:00:05.465 [Pipeline] echo 00:00:05.466 Sorcerer 10.211.164.101 is alive 00:00:05.472 [Pipeline] httpRequest 00:00:05.475 HttpMethod: GET 00:00:05.475 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.476 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.484 Response Code: HTTP/1.1 200 OK 00:00:05.484 Success: Status code 200 is in the accepted range: 200,404 00:00:05.484 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:08.685 [Pipeline] sh 00:00:08.963 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:08.977 [Pipeline] httpRequest 00:00:08.998 [Pipeline] echo 00:00:08.999 Sorcerer 10.211.164.101 is alive 00:00:09.006 [Pipeline] httpRequest 00:00:09.009 HttpMethod: GET 00:00:09.010 URL: http://10.211.164.101/packages/spdk_4835eb82bb1be9e262aefa045af927257ebac260.tar.gz 00:00:09.010 Sending request to url: http://10.211.164.101/packages/spdk_4835eb82bb1be9e262aefa045af927257ebac260.tar.gz 00:00:09.023 Response Code: HTTP/1.1 200 OK 00:00:09.023 Success: Status code 200 is in the accepted range: 200,404 00:00:09.024 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_4835eb82bb1be9e262aefa045af927257ebac260.tar.gz 00:00:51.574 [Pipeline] sh 00:00:51.872 + tar --no-same-owner -xf spdk_4835eb82bb1be9e262aefa045af927257ebac260.tar.gz 00:00:55.179 [Pipeline] sh 00:00:55.455 + git -C spdk log --oneline -n5 00:00:55.455 4835eb82b nvmf: consolidate listener addition in avahi_entry_group_add_listeners 00:00:55.455 719d03c6a sock/uring: only register net impl if supported 00:00:55.455 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:00:55.455 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:00:55.455 6c7c1f57e accel: add sequence outstanding stat 00:00:55.474 [Pipeline] writeFile 00:00:55.490 [Pipeline] sh 00:00:55.767 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:55.798 [Pipeline] sh 00:00:56.081 + cat autorun-spdk.conf 00:00:56.081 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:56.081 SPDK_TEST_NVMF=1 00:00:56.081 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:56.081 SPDK_TEST_URING=1 00:00:56.081 SPDK_TEST_USDT=1 00:00:56.081 SPDK_RUN_UBSAN=1 00:00:56.081 NET_TYPE=virt 00:00:56.081 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:56.088 RUN_NIGHTLY=0 00:00:56.091 [Pipeline] } 00:00:56.107 [Pipeline] // stage 00:00:56.125 [Pipeline] stage 00:00:56.127 [Pipeline] { (Run VM) 00:00:56.142 [Pipeline] sh 00:00:56.421 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:56.421 + echo 'Start stage prepare_nvme.sh' 00:00:56.421 Start stage prepare_nvme.sh 00:00:56.421 + [[ -n 3 ]] 00:00:56.421 + disk_prefix=ex3 00:00:56.421 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:56.421 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:56.421 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:56.421 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:56.421 ++ SPDK_TEST_NVMF=1 00:00:56.421 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:56.421 ++ SPDK_TEST_URING=1 00:00:56.421 ++ SPDK_TEST_USDT=1 00:00:56.421 ++ SPDK_RUN_UBSAN=1 00:00:56.421 ++ NET_TYPE=virt 00:00:56.421 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:56.421 ++ RUN_NIGHTLY=0 
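The autorun-spdk.conf assembled above is a flat KEY=VALUE shell fragment; prepare_nvme.sh (and later spdk/autorun.sh inside the VM) simply source it, which is why every setting is echoed again with a deeper xtrace prefix (++) right after the cat. A minimal sketch of that consumption pattern, assuming a local copy of the file; the SPDK_TEST_NVMF check is illustrative and not taken from the job scripts:

#!/usr/bin/env bash
# Pull the job settings into the current shell, the same way the CI scripts do.
source ./autorun-spdk.conf
# Each key is now an ordinary shell variable that later stages can branch on.
if [[ "${SPDK_TEST_NVMF:-0}" -eq 1 ]]; then
  echo "NVMe-oF tests enabled (transport: ${SPDK_TEST_NVMF_TRANSPORT:-tcp}, uring: ${SPDK_TEST_URING:-0})"
fi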
00:00:56.421 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:56.421 + nvme_files=() 00:00:56.421 + declare -A nvme_files 00:00:56.421 + backend_dir=/var/lib/libvirt/images/backends 00:00:56.421 + nvme_files['nvme.img']=5G 00:00:56.421 + nvme_files['nvme-cmb.img']=5G 00:00:56.421 + nvme_files['nvme-multi0.img']=4G 00:00:56.421 + nvme_files['nvme-multi1.img']=4G 00:00:56.421 + nvme_files['nvme-multi2.img']=4G 00:00:56.421 + nvme_files['nvme-openstack.img']=8G 00:00:56.421 + nvme_files['nvme-zns.img']=5G 00:00:56.421 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:56.421 + (( SPDK_TEST_FTL == 1 )) 00:00:56.421 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:56.421 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:56.421 + for nvme in "${!nvme_files[@]}" 00:00:56.421 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:00:56.421 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:56.421 + for nvme in "${!nvme_files[@]}" 00:00:56.421 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:00:56.421 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:56.421 + for nvme in "${!nvme_files[@]}" 00:00:56.421 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:00:56.421 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:56.422 + for nvme in "${!nvme_files[@]}" 00:00:56.422 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:00:56.422 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:56.422 + for nvme in "${!nvme_files[@]}" 00:00:56.422 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:00:56.422 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:56.422 + for nvme in "${!nvme_files[@]}" 00:00:56.422 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:00:56.422 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:56.422 + for nvme in "${!nvme_files[@]}" 00:00:56.422 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:00:56.680 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:56.680 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:00:56.680 + echo 'End stage prepare_nvme.sh' 00:00:56.680 End stage prepare_nvme.sh 00:00:56.692 [Pipeline] sh 00:00:56.972 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:56.972 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora38 00:00:56.972 00:00:56.972 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 
00:00:56.972 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:56.972 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:56.972 HELP=0 00:00:56.972 DRY_RUN=0 00:00:56.972 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:00:56.972 NVME_DISKS_TYPE=nvme,nvme, 00:00:56.972 NVME_AUTO_CREATE=0 00:00:56.972 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:00:56.972 NVME_CMB=,, 00:00:56.972 NVME_PMR=,, 00:00:56.972 NVME_ZNS=,, 00:00:56.972 NVME_MS=,, 00:00:56.972 NVME_FDP=,, 00:00:56.972 SPDK_VAGRANT_DISTRO=fedora38 00:00:56.972 SPDK_VAGRANT_VMCPU=10 00:00:56.972 SPDK_VAGRANT_VMRAM=12288 00:00:56.972 SPDK_VAGRANT_PROVIDER=libvirt 00:00:56.972 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:56.972 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:56.972 SPDK_OPENSTACK_NETWORK=0 00:00:56.972 VAGRANT_PACKAGE_BOX=0 00:00:56.972 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:56.972 FORCE_DISTRO=true 00:00:56.972 VAGRANT_BOX_VERSION= 00:00:56.972 EXTRA_VAGRANTFILES= 00:00:56.972 NIC_MODEL=e1000 00:00:56.972 00:00:56.972 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:00:56.972 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:01.163 Bringing machine 'default' up with 'libvirt' provider... 00:01:01.163 ==> default: Creating image (snapshot of base box volume). 00:01:01.423 ==> default: Creating domain with the following settings... 00:01:01.423 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721027109_99cd800e99a78dff0c35 00:01:01.423 ==> default: -- Domain type: kvm 00:01:01.423 ==> default: -- Cpus: 10 00:01:01.423 ==> default: -- Feature: acpi 00:01:01.423 ==> default: -- Feature: apic 00:01:01.423 ==> default: -- Feature: pae 00:01:01.423 ==> default: -- Memory: 12288M 00:01:01.423 ==> default: -- Memory Backing: hugepages: 00:01:01.423 ==> default: -- Management MAC: 00:01:01.423 ==> default: -- Loader: 00:01:01.423 ==> default: -- Nvram: 00:01:01.423 ==> default: -- Base box: spdk/fedora38 00:01:01.423 ==> default: -- Storage pool: default 00:01:01.423 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721027109_99cd800e99a78dff0c35.img (20G) 00:01:01.423 ==> default: -- Volume Cache: default 00:01:01.423 ==> default: -- Kernel: 00:01:01.423 ==> default: -- Initrd: 00:01:01.423 ==> default: -- Graphics Type: vnc 00:01:01.423 ==> default: -- Graphics Port: -1 00:01:01.423 ==> default: -- Graphics IP: 127.0.0.1 00:01:01.423 ==> default: -- Graphics Password: Not defined 00:01:01.423 ==> default: -- Video Type: cirrus 00:01:01.423 ==> default: -- Video VRAM: 9216 00:01:01.423 ==> default: -- Sound Type: 00:01:01.423 ==> default: -- Keymap: en-us 00:01:01.423 ==> default: -- TPM Path: 00:01:01.423 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:01.423 ==> default: -- Command line args: 00:01:01.423 ==> default: -> value=-device, 00:01:01.423 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:01.423 ==> default: -> value=-drive, 00:01:01.423 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:01:01.423 ==> default: -> value=-device, 
00:01:01.423 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:01.423 ==> default: -> value=-device, 00:01:01.423 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:01.423 ==> default: -> value=-drive, 00:01:01.423 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:01.423 ==> default: -> value=-device, 00:01:01.423 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:01.423 ==> default: -> value=-drive, 00:01:01.423 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:01.423 ==> default: -> value=-device, 00:01:01.423 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:01.423 ==> default: -> value=-drive, 00:01:01.423 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:01.423 ==> default: -> value=-device, 00:01:01.423 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:01.423 ==> default: Creating shared folders metadata... 00:01:01.423 ==> default: Starting domain. 00:01:02.800 ==> default: Waiting for domain to get an IP address... 00:01:20.914 ==> default: Waiting for SSH to become available... 00:01:20.914 ==> default: Configuring and enabling network interfaces... 00:01:24.200 default: SSH address: 192.168.121.117:22 00:01:24.200 default: SSH username: vagrant 00:01:24.200 default: SSH auth method: private key 00:01:26.732 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:34.834 ==> default: Mounting SSHFS shared folder... 00:01:35.766 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:35.766 ==> default: Checking Mount.. 00:01:36.702 ==> default: Folder Successfully Mounted! 00:01:36.702 ==> default: Running provisioner: file... 00:01:37.636 default: ~/.gitconfig => .gitconfig 00:01:38.204 00:01:38.204 SUCCESS! 00:01:38.204 00:01:38.204 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:38.205 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:38.205 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
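The -device/-drive pairs listed above define the NVMe topology the tests will see inside the guest: controller nvme-0 (serial 12340) carries a single namespace backed by ex3-nvme.img, while controller nvme-1 (serial 12341) carries three namespaces (nsid 1-3) backed by ex3-nvme-multi0/1/2.img, all with 4096-byte logical and physical blocks. Vagrant-libvirt passes these arguments through to QEMU when it defines the domain; purely as an illustration, the same wiring on a hand-started QEMU would look roughly like the sketch below. The machine, accelerator and memory options are assumptions; only the NVMe options are taken from the log.

# Hypothetical standalone equivalent of the nvme-0 controller above;
# nvme-1 repeats the same -drive / -device nvme-ns pattern for nsid 1-3.
qemu-system-x86_64 -machine q35,accel=kvm -m 12288 \
  -device nvme,id=nvme-0,serial=12340,addr=0x10 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096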
00:01:38.205 00:01:38.213 [Pipeline] } 00:01:38.230 [Pipeline] // stage 00:01:38.240 [Pipeline] dir 00:01:38.241 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:01:38.242 [Pipeline] { 00:01:38.256 [Pipeline] catchError 00:01:38.258 [Pipeline] { 00:01:38.273 [Pipeline] sh 00:01:38.555 + vagrant ssh-config --host vagrant 00:01:38.556 + sed -ne /^Host/,$p 00:01:38.556 + tee ssh_conf 00:01:41.840 Host vagrant 00:01:41.840 HostName 192.168.121.117 00:01:41.840 User vagrant 00:01:41.840 Port 22 00:01:41.840 UserKnownHostsFile /dev/null 00:01:41.840 StrictHostKeyChecking no 00:01:41.840 PasswordAuthentication no 00:01:41.840 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:41.840 IdentitiesOnly yes 00:01:41.840 LogLevel FATAL 00:01:41.840 ForwardAgent yes 00:01:41.840 ForwardX11 yes 00:01:41.840 00:01:42.113 [Pipeline] withEnv 00:01:42.115 [Pipeline] { 00:01:42.129 [Pipeline] sh 00:01:42.406 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:42.406 source /etc/os-release 00:01:42.406 [[ -e /image.version ]] && img=$(< /image.version) 00:01:42.406 # Minimal, systemd-like check. 00:01:42.406 if [[ -e /.dockerenv ]]; then 00:01:42.406 # Clear garbage from the node's name: 00:01:42.406 # agt-er_autotest_547-896 -> autotest_547-896 00:01:42.406 # $HOSTNAME is the actual container id 00:01:42.406 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:42.406 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:42.406 # We can assume this is a mount from a host where container is running, 00:01:42.406 # so fetch its hostname to easily identify the target swarm worker. 00:01:42.406 container="$(< /etc/hostname) ($agent)" 00:01:42.406 else 00:01:42.406 # Fallback 00:01:42.406 container=$agent 00:01:42.406 fi 00:01:42.406 fi 00:01:42.406 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:42.406 00:01:42.676 [Pipeline] } 00:01:42.696 [Pipeline] // withEnv 00:01:42.704 [Pipeline] setCustomBuildProperty 00:01:42.720 [Pipeline] stage 00:01:42.722 [Pipeline] { (Tests) 00:01:42.742 [Pipeline] sh 00:01:43.019 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:43.291 [Pipeline] sh 00:01:43.569 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:43.840 [Pipeline] timeout 00:01:43.840 Timeout set to expire in 30 min 00:01:43.842 [Pipeline] { 00:01:43.856 [Pipeline] sh 00:01:44.133 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:44.701 HEAD is now at 4835eb82b nvmf: consolidate listener addition in avahi_entry_group_add_listeners 00:01:44.713 [Pipeline] sh 00:01:44.992 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:45.293 [Pipeline] sh 00:01:45.573 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:45.847 [Pipeline] sh 00:01:46.126 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:46.385 ++ readlink -f spdk_repo 00:01:46.385 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:46.385 + [[ -n /home/vagrant/spdk_repo ]] 00:01:46.385 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:46.385 + 
DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:46.385 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:46.385 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:46.385 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:46.385 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:46.385 + cd /home/vagrant/spdk_repo 00:01:46.385 + source /etc/os-release 00:01:46.385 ++ NAME='Fedora Linux' 00:01:46.385 ++ VERSION='38 (Cloud Edition)' 00:01:46.385 ++ ID=fedora 00:01:46.385 ++ VERSION_ID=38 00:01:46.385 ++ VERSION_CODENAME= 00:01:46.385 ++ PLATFORM_ID=platform:f38 00:01:46.385 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:46.385 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:46.385 ++ LOGO=fedora-logo-icon 00:01:46.385 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:46.385 ++ HOME_URL=https://fedoraproject.org/ 00:01:46.385 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:46.385 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:46.385 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:46.385 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:46.385 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:46.385 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:46.385 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:46.385 ++ SUPPORT_END=2024-05-14 00:01:46.385 ++ VARIANT='Cloud Edition' 00:01:46.385 ++ VARIANT_ID=cloud 00:01:46.385 + uname -a 00:01:46.385 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:46.385 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:46.643 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:46.643 Hugepages 00:01:46.643 node hugesize free / total 00:01:46.643 node0 1048576kB 0 / 0 00:01:46.643 node0 2048kB 0 / 0 00:01:46.643 00:01:46.643 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:46.902 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:46.902 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:46.902 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:46.902 + rm -f /tmp/spdk-ld-path 00:01:46.902 + source autorun-spdk.conf 00:01:46.902 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:46.902 ++ SPDK_TEST_NVMF=1 00:01:46.902 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:46.902 ++ SPDK_TEST_URING=1 00:01:46.902 ++ SPDK_TEST_USDT=1 00:01:46.902 ++ SPDK_RUN_UBSAN=1 00:01:46.902 ++ NET_TYPE=virt 00:01:46.902 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:46.902 ++ RUN_NIGHTLY=0 00:01:46.902 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:46.902 + [[ -n '' ]] 00:01:46.902 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:46.902 + for M in /var/spdk/build-*-manifest.txt 00:01:46.902 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:46.902 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:46.902 + for M in /var/spdk/build-*-manifest.txt 00:01:46.902 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:46.902 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:46.902 ++ uname 00:01:46.902 + [[ Linux == \L\i\n\u\x ]] 00:01:46.902 + sudo dmesg -T 00:01:46.902 + sudo dmesg --clear 00:01:46.902 + dmesg_pid=5151 00:01:46.902 + [[ Fedora Linux == FreeBSD ]] 00:01:46.902 + sudo dmesg -Tw 00:01:46.902 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:46.902 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:46.902 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:46.902 + [[ -x /usr/src/fio-static/fio ]] 00:01:46.902 + export FIO_BIN=/usr/src/fio-static/fio 00:01:46.902 + FIO_BIN=/usr/src/fio-static/fio 00:01:46.902 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:46.902 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:46.902 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:46.902 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:46.902 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:46.902 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:46.902 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:46.902 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:46.902 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:46.902 Test configuration: 00:01:46.902 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:46.902 SPDK_TEST_NVMF=1 00:01:46.902 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:46.902 SPDK_TEST_URING=1 00:01:46.902 SPDK_TEST_USDT=1 00:01:46.902 SPDK_RUN_UBSAN=1 00:01:46.902 NET_TYPE=virt 00:01:46.902 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:47.180 RUN_NIGHTLY=0 07:05:55 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:47.180 07:05:55 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:47.180 07:05:55 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:47.180 07:05:55 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:47.180 07:05:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.180 07:05:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.180 07:05:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.180 07:05:55 -- paths/export.sh@5 -- $ export PATH 00:01:47.180 07:05:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.180 07:05:55 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:47.180 07:05:55 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:47.180 07:05:55 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721027155.XXXXXX 
00:01:47.180 07:05:55 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721027155.0Unhol 00:01:47.180 07:05:55 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:47.180 07:05:55 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:47.180 07:05:55 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:47.181 07:05:55 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:47.181 07:05:55 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:47.181 07:05:55 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:47.181 07:05:55 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:47.181 07:05:55 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.181 07:05:55 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:47.181 07:05:55 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:47.181 07:05:55 -- pm/common@17 -- $ local monitor 00:01:47.181 07:05:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.181 07:05:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.181 07:05:55 -- pm/common@21 -- $ date +%s 00:01:47.181 07:05:55 -- pm/common@25 -- $ sleep 1 00:01:47.181 07:05:55 -- pm/common@21 -- $ date +%s 00:01:47.181 07:05:55 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721027155 00:01:47.181 07:05:55 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721027155 00:01:47.181 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721027155_collect-vmstat.pm.log 00:01:47.181 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721027155_collect-cpu-load.pm.log 00:01:48.173 07:05:56 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:48.173 07:05:56 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:48.173 07:05:56 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:48.173 07:05:56 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:48.173 07:05:56 -- spdk/autobuild.sh@16 -- $ date -u 00:01:48.173 Mon Jul 15 07:05:56 AM UTC 2024 00:01:48.173 07:05:56 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:48.173 v24.09-pre-203-g4835eb82b 00:01:48.173 07:05:56 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:48.173 07:05:56 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:48.173 07:05:56 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:48.173 07:05:56 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:48.173 07:05:56 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:48.173 07:05:56 -- common/autotest_common.sh@10 -- $ set +x 00:01:48.173 ************************************ 00:01:48.173 START TEST ubsan 00:01:48.173 ************************************ 00:01:48.173 using ubsan 00:01:48.173 07:05:56 ubsan -- common/autotest_common.sh@1123 -- 
$ echo 'using ubsan' 00:01:48.173 00:01:48.173 real 0m0.000s 00:01:48.173 user 0m0.000s 00:01:48.173 sys 0m0.000s 00:01:48.173 07:05:56 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:48.173 07:05:56 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:48.173 ************************************ 00:01:48.173 END TEST ubsan 00:01:48.173 ************************************ 00:01:48.173 07:05:56 -- common/autotest_common.sh@1142 -- $ return 0 00:01:48.173 07:05:56 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:48.173 07:05:56 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:48.173 07:05:56 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:48.173 07:05:56 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:48.173 07:05:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:48.173 07:05:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:48.173 07:05:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:48.173 07:05:56 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:48.173 07:05:56 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:48.173 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:48.173 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:48.740 Using 'verbs' RDMA provider 00:02:04.557 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:14.541 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:14.797 Creating mk/config.mk...done. 00:02:14.797 Creating mk/cc.flags.mk...done. 00:02:14.797 Type 'make' to build. 00:02:14.797 07:06:23 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:14.797 07:06:23 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:14.797 07:06:23 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:14.797 07:06:23 -- common/autotest_common.sh@10 -- $ set +x 00:02:14.797 ************************************ 00:02:14.797 START TEST make 00:02:14.797 ************************************ 00:02:14.797 07:06:23 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:15.055 make[1]: Nothing to be done for 'all'. 
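After the ubsan self-test above, the job runs the stock SPDK build flow: ./configure with the parameters derived from the job config (--enable-ubsan, --with-uring, --with-usdt and so on), which also sets up the bundled DPDK whose Meson output follows, and then make. A rough local equivalent, assuming an SPDK checkout with submodules initialized and with the flags trimmed to the ones this configuration implies:

# Approximate repro of the build step, run from the root of an SPDK clone.
# --with-uring needs liburing development headers installed.
./configure --enable-debug --enable-ubsan --with-uring --with-usdt
make -j"$(nproc)"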
00:02:29.925 The Meson build system 00:02:29.925 Version: 1.3.1 00:02:29.925 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:29.925 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:29.925 Build type: native build 00:02:29.925 Program cat found: YES (/usr/bin/cat) 00:02:29.925 Project name: DPDK 00:02:29.925 Project version: 24.03.0 00:02:29.925 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:29.926 C linker for the host machine: cc ld.bfd 2.39-16 00:02:29.926 Host machine cpu family: x86_64 00:02:29.926 Host machine cpu: x86_64 00:02:29.926 Message: ## Building in Developer Mode ## 00:02:29.926 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:29.926 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:29.926 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:29.926 Program python3 found: YES (/usr/bin/python3) 00:02:29.926 Program cat found: YES (/usr/bin/cat) 00:02:29.926 Compiler for C supports arguments -march=native: YES 00:02:29.926 Checking for size of "void *" : 8 00:02:29.926 Checking for size of "void *" : 8 (cached) 00:02:29.926 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:29.926 Library m found: YES 00:02:29.926 Library numa found: YES 00:02:29.926 Has header "numaif.h" : YES 00:02:29.926 Library fdt found: NO 00:02:29.926 Library execinfo found: NO 00:02:29.926 Has header "execinfo.h" : YES 00:02:29.926 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:29.926 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:29.926 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:29.926 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:29.926 Run-time dependency openssl found: YES 3.0.9 00:02:29.926 Run-time dependency libpcap found: YES 1.10.4 00:02:29.926 Has header "pcap.h" with dependency libpcap: YES 00:02:29.926 Compiler for C supports arguments -Wcast-qual: YES 00:02:29.926 Compiler for C supports arguments -Wdeprecated: YES 00:02:29.926 Compiler for C supports arguments -Wformat: YES 00:02:29.926 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:29.926 Compiler for C supports arguments -Wformat-security: NO 00:02:29.926 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:29.926 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:29.926 Compiler for C supports arguments -Wnested-externs: YES 00:02:29.926 Compiler for C supports arguments -Wold-style-definition: YES 00:02:29.926 Compiler for C supports arguments -Wpointer-arith: YES 00:02:29.926 Compiler for C supports arguments -Wsign-compare: YES 00:02:29.926 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:29.926 Compiler for C supports arguments -Wundef: YES 00:02:29.926 Compiler for C supports arguments -Wwrite-strings: YES 00:02:29.926 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:29.926 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:29.926 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:29.926 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:29.926 Program objdump found: YES (/usr/bin/objdump) 00:02:29.926 Compiler for C supports arguments -mavx512f: YES 00:02:29.926 Checking if "AVX512 checking" compiles: YES 00:02:29.926 Fetching value of define "__SSE4_2__" : 1 00:02:29.926 Fetching value of define 
"__AES__" : 1 00:02:29.926 Fetching value of define "__AVX__" : 1 00:02:29.926 Fetching value of define "__AVX2__" : 1 00:02:29.926 Fetching value of define "__AVX512BW__" : (undefined) 00:02:29.926 Fetching value of define "__AVX512CD__" : (undefined) 00:02:29.926 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:29.926 Fetching value of define "__AVX512F__" : (undefined) 00:02:29.926 Fetching value of define "__AVX512VL__" : (undefined) 00:02:29.926 Fetching value of define "__PCLMUL__" : 1 00:02:29.926 Fetching value of define "__RDRND__" : 1 00:02:29.926 Fetching value of define "__RDSEED__" : 1 00:02:29.926 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:29.926 Fetching value of define "__znver1__" : (undefined) 00:02:29.926 Fetching value of define "__znver2__" : (undefined) 00:02:29.926 Fetching value of define "__znver3__" : (undefined) 00:02:29.926 Fetching value of define "__znver4__" : (undefined) 00:02:29.926 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:29.926 Message: lib/log: Defining dependency "log" 00:02:29.926 Message: lib/kvargs: Defining dependency "kvargs" 00:02:29.926 Message: lib/telemetry: Defining dependency "telemetry" 00:02:29.926 Checking for function "getentropy" : NO 00:02:29.926 Message: lib/eal: Defining dependency "eal" 00:02:29.926 Message: lib/ring: Defining dependency "ring" 00:02:29.926 Message: lib/rcu: Defining dependency "rcu" 00:02:29.926 Message: lib/mempool: Defining dependency "mempool" 00:02:29.926 Message: lib/mbuf: Defining dependency "mbuf" 00:02:29.926 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:29.926 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:29.926 Compiler for C supports arguments -mpclmul: YES 00:02:29.926 Compiler for C supports arguments -maes: YES 00:02:29.926 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:29.926 Compiler for C supports arguments -mavx512bw: YES 00:02:29.926 Compiler for C supports arguments -mavx512dq: YES 00:02:29.926 Compiler for C supports arguments -mavx512vl: YES 00:02:29.926 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:29.926 Compiler for C supports arguments -mavx2: YES 00:02:29.926 Compiler for C supports arguments -mavx: YES 00:02:29.926 Message: lib/net: Defining dependency "net" 00:02:29.926 Message: lib/meter: Defining dependency "meter" 00:02:29.926 Message: lib/ethdev: Defining dependency "ethdev" 00:02:29.926 Message: lib/pci: Defining dependency "pci" 00:02:29.926 Message: lib/cmdline: Defining dependency "cmdline" 00:02:29.926 Message: lib/hash: Defining dependency "hash" 00:02:29.926 Message: lib/timer: Defining dependency "timer" 00:02:29.926 Message: lib/compressdev: Defining dependency "compressdev" 00:02:29.926 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:29.926 Message: lib/dmadev: Defining dependency "dmadev" 00:02:29.926 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:29.926 Message: lib/power: Defining dependency "power" 00:02:29.926 Message: lib/reorder: Defining dependency "reorder" 00:02:29.926 Message: lib/security: Defining dependency "security" 00:02:29.926 Has header "linux/userfaultfd.h" : YES 00:02:29.926 Has header "linux/vduse.h" : YES 00:02:29.926 Message: lib/vhost: Defining dependency "vhost" 00:02:29.926 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:29.926 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:29.926 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:29.926 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:29.926 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:29.926 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:29.926 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:29.926 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:29.926 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:29.926 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:29.926 Program doxygen found: YES (/usr/bin/doxygen) 00:02:29.926 Configuring doxy-api-html.conf using configuration 00:02:29.926 Configuring doxy-api-man.conf using configuration 00:02:29.926 Program mandb found: YES (/usr/bin/mandb) 00:02:29.926 Program sphinx-build found: NO 00:02:29.926 Configuring rte_build_config.h using configuration 00:02:29.926 Message: 00:02:29.926 ================= 00:02:29.926 Applications Enabled 00:02:29.926 ================= 00:02:29.926 00:02:29.926 apps: 00:02:29.926 00:02:29.926 00:02:29.926 Message: 00:02:29.926 ================= 00:02:29.926 Libraries Enabled 00:02:29.926 ================= 00:02:29.926 00:02:29.927 libs: 00:02:29.927 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:29.927 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:29.927 cryptodev, dmadev, power, reorder, security, vhost, 00:02:29.927 00:02:29.927 Message: 00:02:29.927 =============== 00:02:29.927 Drivers Enabled 00:02:29.927 =============== 00:02:29.927 00:02:29.927 common: 00:02:29.927 00:02:29.927 bus: 00:02:29.927 pci, vdev, 00:02:29.927 mempool: 00:02:29.927 ring, 00:02:29.927 dma: 00:02:29.927 00:02:29.927 net: 00:02:29.927 00:02:29.927 crypto: 00:02:29.927 00:02:29.927 compress: 00:02:29.927 00:02:29.927 vdpa: 00:02:29.927 00:02:29.927 00:02:29.927 Message: 00:02:29.927 ================= 00:02:29.927 Content Skipped 00:02:29.927 ================= 00:02:29.927 00:02:29.927 apps: 00:02:29.927 dumpcap: explicitly disabled via build config 00:02:29.927 graph: explicitly disabled via build config 00:02:29.927 pdump: explicitly disabled via build config 00:02:29.927 proc-info: explicitly disabled via build config 00:02:29.927 test-acl: explicitly disabled via build config 00:02:29.927 test-bbdev: explicitly disabled via build config 00:02:29.927 test-cmdline: explicitly disabled via build config 00:02:29.927 test-compress-perf: explicitly disabled via build config 00:02:29.927 test-crypto-perf: explicitly disabled via build config 00:02:29.927 test-dma-perf: explicitly disabled via build config 00:02:29.927 test-eventdev: explicitly disabled via build config 00:02:29.927 test-fib: explicitly disabled via build config 00:02:29.927 test-flow-perf: explicitly disabled via build config 00:02:29.927 test-gpudev: explicitly disabled via build config 00:02:29.927 test-mldev: explicitly disabled via build config 00:02:29.927 test-pipeline: explicitly disabled via build config 00:02:29.927 test-pmd: explicitly disabled via build config 00:02:29.927 test-regex: explicitly disabled via build config 00:02:29.927 test-sad: explicitly disabled via build config 00:02:29.927 test-security-perf: explicitly disabled via build config 00:02:29.927 00:02:29.927 libs: 00:02:29.927 argparse: explicitly disabled via build config 00:02:29.927 metrics: explicitly disabled via build config 00:02:29.927 acl: explicitly disabled via build config 00:02:29.927 bbdev: explicitly disabled via build config 00:02:29.927 
bitratestats: explicitly disabled via build config 00:02:29.927 bpf: explicitly disabled via build config 00:02:29.927 cfgfile: explicitly disabled via build config 00:02:29.927 distributor: explicitly disabled via build config 00:02:29.927 efd: explicitly disabled via build config 00:02:29.927 eventdev: explicitly disabled via build config 00:02:29.927 dispatcher: explicitly disabled via build config 00:02:29.927 gpudev: explicitly disabled via build config 00:02:29.927 gro: explicitly disabled via build config 00:02:29.927 gso: explicitly disabled via build config 00:02:29.927 ip_frag: explicitly disabled via build config 00:02:29.927 jobstats: explicitly disabled via build config 00:02:29.927 latencystats: explicitly disabled via build config 00:02:29.927 lpm: explicitly disabled via build config 00:02:29.927 member: explicitly disabled via build config 00:02:29.927 pcapng: explicitly disabled via build config 00:02:29.927 rawdev: explicitly disabled via build config 00:02:29.927 regexdev: explicitly disabled via build config 00:02:29.927 mldev: explicitly disabled via build config 00:02:29.927 rib: explicitly disabled via build config 00:02:29.927 sched: explicitly disabled via build config 00:02:29.927 stack: explicitly disabled via build config 00:02:29.927 ipsec: explicitly disabled via build config 00:02:29.927 pdcp: explicitly disabled via build config 00:02:29.927 fib: explicitly disabled via build config 00:02:29.927 port: explicitly disabled via build config 00:02:29.927 pdump: explicitly disabled via build config 00:02:29.927 table: explicitly disabled via build config 00:02:29.927 pipeline: explicitly disabled via build config 00:02:29.927 graph: explicitly disabled via build config 00:02:29.927 node: explicitly disabled via build config 00:02:29.927 00:02:29.927 drivers: 00:02:29.927 common/cpt: not in enabled drivers build config 00:02:29.927 common/dpaax: not in enabled drivers build config 00:02:29.927 common/iavf: not in enabled drivers build config 00:02:29.927 common/idpf: not in enabled drivers build config 00:02:29.927 common/ionic: not in enabled drivers build config 00:02:29.927 common/mvep: not in enabled drivers build config 00:02:29.927 common/octeontx: not in enabled drivers build config 00:02:29.927 bus/auxiliary: not in enabled drivers build config 00:02:29.927 bus/cdx: not in enabled drivers build config 00:02:29.927 bus/dpaa: not in enabled drivers build config 00:02:29.927 bus/fslmc: not in enabled drivers build config 00:02:29.927 bus/ifpga: not in enabled drivers build config 00:02:29.927 bus/platform: not in enabled drivers build config 00:02:29.927 bus/uacce: not in enabled drivers build config 00:02:29.927 bus/vmbus: not in enabled drivers build config 00:02:29.927 common/cnxk: not in enabled drivers build config 00:02:29.927 common/mlx5: not in enabled drivers build config 00:02:29.927 common/nfp: not in enabled drivers build config 00:02:29.927 common/nitrox: not in enabled drivers build config 00:02:29.927 common/qat: not in enabled drivers build config 00:02:29.927 common/sfc_efx: not in enabled drivers build config 00:02:29.927 mempool/bucket: not in enabled drivers build config 00:02:29.927 mempool/cnxk: not in enabled drivers build config 00:02:29.927 mempool/dpaa: not in enabled drivers build config 00:02:29.927 mempool/dpaa2: not in enabled drivers build config 00:02:29.927 mempool/octeontx: not in enabled drivers build config 00:02:29.927 mempool/stack: not in enabled drivers build config 00:02:29.927 dma/cnxk: not in enabled drivers build 
config 00:02:29.927 dma/dpaa: not in enabled drivers build config 00:02:29.927 dma/dpaa2: not in enabled drivers build config 00:02:29.927 dma/hisilicon: not in enabled drivers build config 00:02:29.927 dma/idxd: not in enabled drivers build config 00:02:29.927 dma/ioat: not in enabled drivers build config 00:02:29.927 dma/skeleton: not in enabled drivers build config 00:02:29.927 net/af_packet: not in enabled drivers build config 00:02:29.927 net/af_xdp: not in enabled drivers build config 00:02:29.927 net/ark: not in enabled drivers build config 00:02:29.927 net/atlantic: not in enabled drivers build config 00:02:29.927 net/avp: not in enabled drivers build config 00:02:29.927 net/axgbe: not in enabled drivers build config 00:02:29.927 net/bnx2x: not in enabled drivers build config 00:02:29.927 net/bnxt: not in enabled drivers build config 00:02:29.927 net/bonding: not in enabled drivers build config 00:02:29.927 net/cnxk: not in enabled drivers build config 00:02:29.927 net/cpfl: not in enabled drivers build config 00:02:29.927 net/cxgbe: not in enabled drivers build config 00:02:29.927 net/dpaa: not in enabled drivers build config 00:02:29.927 net/dpaa2: not in enabled drivers build config 00:02:29.927 net/e1000: not in enabled drivers build config 00:02:29.927 net/ena: not in enabled drivers build config 00:02:29.927 net/enetc: not in enabled drivers build config 00:02:29.927 net/enetfec: not in enabled drivers build config 00:02:29.927 net/enic: not in enabled drivers build config 00:02:29.927 net/failsafe: not in enabled drivers build config 00:02:29.927 net/fm10k: not in enabled drivers build config 00:02:29.927 net/gve: not in enabled drivers build config 00:02:29.927 net/hinic: not in enabled drivers build config 00:02:29.927 net/hns3: not in enabled drivers build config 00:02:29.927 net/i40e: not in enabled drivers build config 00:02:29.927 net/iavf: not in enabled drivers build config 00:02:29.927 net/ice: not in enabled drivers build config 00:02:29.927 net/idpf: not in enabled drivers build config 00:02:29.927 net/igc: not in enabled drivers build config 00:02:29.927 net/ionic: not in enabled drivers build config 00:02:29.928 net/ipn3ke: not in enabled drivers build config 00:02:29.928 net/ixgbe: not in enabled drivers build config 00:02:29.928 net/mana: not in enabled drivers build config 00:02:29.928 net/memif: not in enabled drivers build config 00:02:29.928 net/mlx4: not in enabled drivers build config 00:02:29.928 net/mlx5: not in enabled drivers build config 00:02:29.928 net/mvneta: not in enabled drivers build config 00:02:29.928 net/mvpp2: not in enabled drivers build config 00:02:29.928 net/netvsc: not in enabled drivers build config 00:02:29.928 net/nfb: not in enabled drivers build config 00:02:29.928 net/nfp: not in enabled drivers build config 00:02:29.928 net/ngbe: not in enabled drivers build config 00:02:29.928 net/null: not in enabled drivers build config 00:02:29.928 net/octeontx: not in enabled drivers build config 00:02:29.928 net/octeon_ep: not in enabled drivers build config 00:02:29.928 net/pcap: not in enabled drivers build config 00:02:29.928 net/pfe: not in enabled drivers build config 00:02:29.928 net/qede: not in enabled drivers build config 00:02:29.928 net/ring: not in enabled drivers build config 00:02:29.928 net/sfc: not in enabled drivers build config 00:02:29.928 net/softnic: not in enabled drivers build config 00:02:29.928 net/tap: not in enabled drivers build config 00:02:29.928 net/thunderx: not in enabled drivers build config 00:02:29.928 
net/txgbe: not in enabled drivers build config 00:02:29.928 net/vdev_netvsc: not in enabled drivers build config 00:02:29.928 net/vhost: not in enabled drivers build config 00:02:29.928 net/virtio: not in enabled drivers build config 00:02:29.928 net/vmxnet3: not in enabled drivers build config 00:02:29.928 raw/*: missing internal dependency, "rawdev" 00:02:29.928 crypto/armv8: not in enabled drivers build config 00:02:29.928 crypto/bcmfs: not in enabled drivers build config 00:02:29.928 crypto/caam_jr: not in enabled drivers build config 00:02:29.928 crypto/ccp: not in enabled drivers build config 00:02:29.928 crypto/cnxk: not in enabled drivers build config 00:02:29.928 crypto/dpaa_sec: not in enabled drivers build config 00:02:29.928 crypto/dpaa2_sec: not in enabled drivers build config 00:02:29.928 crypto/ipsec_mb: not in enabled drivers build config 00:02:29.928 crypto/mlx5: not in enabled drivers build config 00:02:29.928 crypto/mvsam: not in enabled drivers build config 00:02:29.928 crypto/nitrox: not in enabled drivers build config 00:02:29.928 crypto/null: not in enabled drivers build config 00:02:29.928 crypto/octeontx: not in enabled drivers build config 00:02:29.928 crypto/openssl: not in enabled drivers build config 00:02:29.928 crypto/scheduler: not in enabled drivers build config 00:02:29.928 crypto/uadk: not in enabled drivers build config 00:02:29.928 crypto/virtio: not in enabled drivers build config 00:02:29.928 compress/isal: not in enabled drivers build config 00:02:29.928 compress/mlx5: not in enabled drivers build config 00:02:29.928 compress/nitrox: not in enabled drivers build config 00:02:29.928 compress/octeontx: not in enabled drivers build config 00:02:29.928 compress/zlib: not in enabled drivers build config 00:02:29.928 regex/*: missing internal dependency, "regexdev" 00:02:29.928 ml/*: missing internal dependency, "mldev" 00:02:29.928 vdpa/ifc: not in enabled drivers build config 00:02:29.928 vdpa/mlx5: not in enabled drivers build config 00:02:29.928 vdpa/nfp: not in enabled drivers build config 00:02:29.928 vdpa/sfc: not in enabled drivers build config 00:02:29.928 event/*: missing internal dependency, "eventdev" 00:02:29.928 baseband/*: missing internal dependency, "bbdev" 00:02:29.928 gpu/*: missing internal dependency, "gpudev" 00:02:29.928 00:02:29.928 00:02:29.928 Build targets in project: 85 00:02:29.928 00:02:29.928 DPDK 24.03.0 00:02:29.928 00:02:29.928 User defined options 00:02:29.928 buildtype : debug 00:02:29.928 default_library : shared 00:02:29.928 libdir : lib 00:02:29.928 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:29.928 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:29.928 c_link_args : 00:02:29.928 cpu_instruction_set: native 00:02:29.928 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:29.928 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:29.928 enable_docs : false 00:02:29.928 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:29.928 enable_kmods : false 00:02:29.928 max_lcores : 128 00:02:29.928 tests : false 00:02:29.928 00:02:29.928 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:29.928 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:29.928 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:29.928 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:29.928 [3/268] Linking static target lib/librte_kvargs.a 00:02:29.928 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:29.928 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:29.928 [6/268] Linking static target lib/librte_log.a 00:02:29.928 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.928 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:29.928 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:29.928 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:29.928 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:29.928 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:29.928 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:29.928 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:29.928 [15/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.928 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:29.928 [17/268] Linking target lib/librte_log.so.24.1 00:02:30.191 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:30.191 [19/268] Linking static target lib/librte_telemetry.a 00:02:30.191 [20/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:30.191 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:30.191 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:30.450 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:30.450 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:30.708 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:30.708 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:30.966 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:30.966 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:30.966 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:30.966 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.966 [31/268] Linking target lib/librte_telemetry.so.24.1 00:02:31.223 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:31.223 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:31.223 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:31.482 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:31.482 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:31.482 [37/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:31.482 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:31.482 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:31.740 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:31.740 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:31.740 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:31.998 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:31.998 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:31.998 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:31.998 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:32.256 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:32.256 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:32.256 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:32.256 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:32.513 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:32.513 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:32.773 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:33.051 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:33.051 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:33.051 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:33.051 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:33.051 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:33.331 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:33.331 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:33.331 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:33.331 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:33.589 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:33.847 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:33.847 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:33.847 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:34.105 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:34.105 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:34.105 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:34.379 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:34.379 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:34.379 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:34.650 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:34.650 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:34.650 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:34.650 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:34.650 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:34.907 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:34.907 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:35.165 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:35.165 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:35.165 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:35.422 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:35.422 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:35.422 [85/268] Linking static target lib/librte_ring.a 00:02:35.422 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:35.422 [87/268] Linking static target lib/librte_eal.a 00:02:35.680 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:35.680 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:35.680 [90/268] Linking static target lib/librte_rcu.a 00:02:35.680 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:35.938 [92/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.938 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:36.195 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:36.195 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:36.195 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:36.195 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:36.195 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.195 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:36.195 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:36.195 [101/268] Linking static target lib/librte_mempool.a 00:02:36.195 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:36.454 [103/268] Linking static target lib/librte_mbuf.a 00:02:36.714 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:36.714 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:36.714 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:36.973 [107/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:36.973 [108/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:36.973 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:36.973 [110/268] Linking static target lib/librte_net.a 00:02:36.973 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:37.231 [112/268] Linking static target lib/librte_meter.a 00:02:37.231 [113/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.490 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.490 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:37.490 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:37.490 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.490 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.490 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:38.056 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:38.056 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:38.313 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:38.313 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:38.571 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:38.571 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:38.571 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:38.571 [127/268] Linking static target lib/librte_pci.a 00:02:38.830 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:38.830 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:38.830 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:38.830 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:38.830 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:38.830 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:38.830 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:39.087 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:39.087 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:39.087 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:39.087 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:39.087 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.087 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:39.087 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:39.087 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:39.087 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:39.087 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:39.087 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:39.345 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:39.345 [147/268] Linking static target lib/librte_ethdev.a 00:02:39.345 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:39.603 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:39.603 [150/268] Linking static target lib/librte_cmdline.a 00:02:39.603 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:39.603 [152/268] Linking static target lib/librte_timer.a 00:02:39.861 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:39.861 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:39.861 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:39.861 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:39.861 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:40.428 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:40.428 [159/268] Linking static target lib/librte_hash.a 00:02:40.428 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:40.428 
[161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.428 [162/268] Linking static target lib/librte_compressdev.a 00:02:40.428 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:40.686 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:40.686 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:40.686 [166/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:40.686 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:40.943 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:40.943 [169/268] Linking static target lib/librte_dmadev.a 00:02:41.201 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:41.201 [171/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:41.201 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:41.201 [173/268] Linking static target lib/librte_cryptodev.a 00:02:41.201 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:41.458 [175/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.458 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.458 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.458 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:41.716 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.716 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:41.974 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:41.974 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:41.974 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:41.974 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:42.232 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:42.232 [186/268] Linking static target lib/librte_power.a 00:02:42.232 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:42.232 [188/268] Linking static target lib/librte_reorder.a 00:02:42.491 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:42.491 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:42.491 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:42.491 [192/268] Linking static target lib/librte_security.a 00:02:42.491 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:42.749 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.065 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:43.065 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.065 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.336 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:43.336 [199/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:43.336 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:43.594 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:43.594 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:43.594 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:43.853 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:43.853 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:43.853 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:44.111 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:44.111 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:44.111 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:44.111 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:44.111 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:44.111 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:44.370 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:44.370 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:44.370 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:44.370 [216/268] Linking static target drivers/librte_bus_pci.a 00:02:44.370 [217/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:44.370 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:44.370 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:44.370 [220/268] Linking static target drivers/librte_bus_vdev.a 00:02:44.370 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:44.370 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:44.629 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.629 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:44.629 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:44.629 [226/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:44.629 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:44.886 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.820 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:45.820 [230/268] Linking static target lib/librte_vhost.a 00:02:46.078 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.078 [232/268] Linking target lib/librte_eal.so.24.1 00:02:46.078 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:46.336 [234/268] Linking target lib/librte_meter.so.24.1 00:02:46.336 [235/268] Linking target lib/librte_timer.so.24.1 00:02:46.336 [236/268] Linking target lib/librte_ring.so.24.1 00:02:46.336 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:46.336 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:46.336 [239/268] Linking target 
lib/librte_pci.so.24.1 00:02:46.336 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:46.336 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:46.336 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:46.336 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:46.336 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:46.336 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:46.336 [246/268] Linking target lib/librte_mempool.so.24.1 00:02:46.336 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:46.595 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:46.595 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:46.595 [250/268] Linking target lib/librte_mbuf.so.24.1 00:02:46.595 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:46.595 [252/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.854 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:46.854 [254/268] Linking target lib/librte_net.so.24.1 00:02:46.854 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:46.854 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:46.854 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:02:46.854 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:47.112 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:47.112 [260/268] Linking target lib/librte_hash.so.24.1 00:02:47.112 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:47.112 [262/268] Linking target lib/librte_security.so.24.1 00:02:47.112 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:47.112 [264/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.112 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:47.112 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:47.370 [267/268] Linking target lib/librte_power.so.24.1 00:02:47.370 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:47.370 INFO: autodetecting backend as ninja 00:02:47.370 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:48.306 CC lib/log/log.o 00:02:48.306 CC lib/ut/ut.o 00:02:48.306 CC lib/log/log_flags.o 00:02:48.306 CC lib/log/log_deprecated.o 00:02:48.306 CC lib/ut_mock/mock.o 00:02:48.565 LIB libspdk_ut.a 00:02:48.565 LIB libspdk_ut_mock.a 00:02:48.565 LIB libspdk_log.a 00:02:48.565 SO libspdk_ut_mock.so.6.0 00:02:48.565 SO libspdk_ut.so.2.0 00:02:48.565 SO libspdk_log.so.7.0 00:02:48.565 SYMLINK libspdk_ut_mock.so 00:02:48.565 SYMLINK libspdk_ut.so 00:02:48.823 SYMLINK libspdk_log.so 00:02:48.823 CC lib/ioat/ioat.o 00:02:48.823 CC lib/dma/dma.o 00:02:48.823 CC lib/util/base64.o 00:02:48.823 CXX lib/trace_parser/trace.o 00:02:48.823 CC lib/util/bit_array.o 00:02:48.823 CC lib/util/cpuset.o 00:02:48.823 CC lib/util/crc32.o 00:02:48.823 CC lib/util/crc16.o 00:02:48.823 CC lib/util/crc32c.o 00:02:49.081 CC lib/vfio_user/host/vfio_user_pci.o 00:02:49.081 CC lib/util/crc32_ieee.o 00:02:49.081 CC lib/util/crc64.o 00:02:49.081 CC 
lib/vfio_user/host/vfio_user.o 00:02:49.081 CC lib/util/dif.o 00:02:49.081 CC lib/util/fd.o 00:02:49.081 LIB libspdk_dma.a 00:02:49.081 SO libspdk_dma.so.4.0 00:02:49.338 SYMLINK libspdk_dma.so 00:02:49.338 CC lib/util/file.o 00:02:49.338 CC lib/util/hexlify.o 00:02:49.338 CC lib/util/iov.o 00:02:49.338 CC lib/util/math.o 00:02:49.338 CC lib/util/pipe.o 00:02:49.338 CC lib/util/strerror_tls.o 00:02:49.338 LIB libspdk_ioat.a 00:02:49.338 SO libspdk_ioat.so.7.0 00:02:49.338 CC lib/util/string.o 00:02:49.338 LIB libspdk_vfio_user.a 00:02:49.338 CC lib/util/uuid.o 00:02:49.338 CC lib/util/fd_group.o 00:02:49.338 SYMLINK libspdk_ioat.so 00:02:49.338 SO libspdk_vfio_user.so.5.0 00:02:49.338 CC lib/util/xor.o 00:02:49.338 CC lib/util/zipf.o 00:02:49.594 SYMLINK libspdk_vfio_user.so 00:02:49.594 LIB libspdk_util.a 00:02:49.851 SO libspdk_util.so.9.1 00:02:49.851 SYMLINK libspdk_util.so 00:02:50.109 LIB libspdk_trace_parser.a 00:02:50.109 SO libspdk_trace_parser.so.5.0 00:02:50.109 CC lib/rdma_provider/common.o 00:02:50.109 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:50.109 CC lib/rdma_utils/rdma_utils.o 00:02:50.109 CC lib/idxd/idxd_user.o 00:02:50.109 CC lib/env_dpdk/env.o 00:02:50.109 CC lib/idxd/idxd.o 00:02:50.109 CC lib/conf/conf.o 00:02:50.109 CC lib/vmd/vmd.o 00:02:50.109 CC lib/json/json_parse.o 00:02:50.366 SYMLINK libspdk_trace_parser.so 00:02:50.366 CC lib/idxd/idxd_kernel.o 00:02:50.366 CC lib/json/json_util.o 00:02:50.366 LIB libspdk_rdma_provider.a 00:02:50.366 SO libspdk_rdma_provider.so.6.0 00:02:50.366 LIB libspdk_conf.a 00:02:50.366 CC lib/json/json_write.o 00:02:50.366 SO libspdk_conf.so.6.0 00:02:50.366 LIB libspdk_rdma_utils.a 00:02:50.366 CC lib/env_dpdk/memory.o 00:02:50.366 CC lib/env_dpdk/pci.o 00:02:50.366 SYMLINK libspdk_rdma_provider.so 00:02:50.366 CC lib/env_dpdk/init.o 00:02:50.366 SO libspdk_rdma_utils.so.1.0 00:02:50.366 SYMLINK libspdk_conf.so 00:02:50.366 CC lib/env_dpdk/threads.o 00:02:50.624 SYMLINK libspdk_rdma_utils.so 00:02:50.624 CC lib/env_dpdk/pci_ioat.o 00:02:50.624 CC lib/env_dpdk/pci_virtio.o 00:02:50.624 CC lib/env_dpdk/pci_vmd.o 00:02:50.624 LIB libspdk_idxd.a 00:02:50.624 LIB libspdk_json.a 00:02:50.624 CC lib/env_dpdk/pci_idxd.o 00:02:50.624 CC lib/env_dpdk/pci_event.o 00:02:50.624 SO libspdk_idxd.so.12.0 00:02:50.624 SO libspdk_json.so.6.0 00:02:50.882 CC lib/vmd/led.o 00:02:50.882 SYMLINK libspdk_json.so 00:02:50.882 SYMLINK libspdk_idxd.so 00:02:50.882 CC lib/env_dpdk/sigbus_handler.o 00:02:50.882 CC lib/env_dpdk/pci_dpdk.o 00:02:50.882 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:50.882 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:50.882 LIB libspdk_vmd.a 00:02:50.882 CC lib/jsonrpc/jsonrpc_server.o 00:02:50.882 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:50.882 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:50.882 CC lib/jsonrpc/jsonrpc_client.o 00:02:50.882 SO libspdk_vmd.so.6.0 00:02:51.138 SYMLINK libspdk_vmd.so 00:02:51.138 LIB libspdk_jsonrpc.a 00:02:51.395 SO libspdk_jsonrpc.so.6.0 00:02:51.396 SYMLINK libspdk_jsonrpc.so 00:02:51.652 LIB libspdk_env_dpdk.a 00:02:51.652 CC lib/rpc/rpc.o 00:02:51.652 SO libspdk_env_dpdk.so.14.1 00:02:51.908 SYMLINK libspdk_env_dpdk.so 00:02:51.909 LIB libspdk_rpc.a 00:02:51.909 SO libspdk_rpc.so.6.0 00:02:51.909 SYMLINK libspdk_rpc.so 00:02:52.165 CC lib/keyring/keyring.o 00:02:52.165 CC lib/keyring/keyring_rpc.o 00:02:52.165 CC lib/notify/notify.o 00:02:52.165 CC lib/notify/notify_rpc.o 00:02:52.165 CC lib/trace/trace.o 00:02:52.165 CC lib/trace/trace_flags.o 00:02:52.165 CC lib/trace/trace_rpc.o 00:02:52.422 LIB 
libspdk_notify.a 00:02:52.422 SO libspdk_notify.so.6.0 00:02:52.422 SYMLINK libspdk_notify.so 00:02:52.422 LIB libspdk_trace.a 00:02:52.422 LIB libspdk_keyring.a 00:02:52.422 SO libspdk_keyring.so.1.0 00:02:52.422 SO libspdk_trace.so.10.0 00:02:52.715 SYMLINK libspdk_keyring.so 00:02:52.715 SYMLINK libspdk_trace.so 00:02:52.715 CC lib/sock/sock_rpc.o 00:02:52.715 CC lib/sock/sock.o 00:02:52.715 CC lib/thread/thread.o 00:02:52.715 CC lib/thread/iobuf.o 00:02:53.282 LIB libspdk_sock.a 00:02:53.282 SO libspdk_sock.so.10.0 00:02:53.539 SYMLINK libspdk_sock.so 00:02:53.798 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:53.798 CC lib/nvme/nvme_ctrlr.o 00:02:53.798 CC lib/nvme/nvme_fabric.o 00:02:53.798 CC lib/nvme/nvme_ns_cmd.o 00:02:53.798 CC lib/nvme/nvme_ns.o 00:02:53.798 CC lib/nvme/nvme_pcie_common.o 00:02:53.798 CC lib/nvme/nvme_pcie.o 00:02:53.798 CC lib/nvme/nvme.o 00:02:53.798 CC lib/nvme/nvme_qpair.o 00:02:54.363 CC lib/nvme/nvme_quirks.o 00:02:54.621 LIB libspdk_thread.a 00:02:54.621 SO libspdk_thread.so.10.1 00:02:54.621 SYMLINK libspdk_thread.so 00:02:54.621 CC lib/nvme/nvme_transport.o 00:02:54.621 CC lib/nvme/nvme_discovery.o 00:02:54.621 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:54.879 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:54.879 CC lib/nvme/nvme_tcp.o 00:02:54.879 CC lib/accel/accel.o 00:02:54.879 CC lib/blob/blobstore.o 00:02:54.879 CC lib/nvme/nvme_opal.o 00:02:55.137 CC lib/nvme/nvme_io_msg.o 00:02:55.395 CC lib/nvme/nvme_poll_group.o 00:02:55.395 CC lib/nvme/nvme_zns.o 00:02:55.395 CC lib/nvme/nvme_stubs.o 00:02:55.395 CC lib/blob/request.o 00:02:55.653 CC lib/init/json_config.o 00:02:55.911 CC lib/init/subsystem.o 00:02:55.911 CC lib/init/subsystem_rpc.o 00:02:55.911 CC lib/accel/accel_rpc.o 00:02:55.911 CC lib/accel/accel_sw.o 00:02:55.911 CC lib/init/rpc.o 00:02:55.911 CC lib/nvme/nvme_auth.o 00:02:55.911 CC lib/blob/zeroes.o 00:02:56.169 CC lib/blob/blob_bs_dev.o 00:02:56.169 CC lib/nvme/nvme_cuse.o 00:02:56.169 CC lib/nvme/nvme_rdma.o 00:02:56.169 CC lib/virtio/virtio.o 00:02:56.169 LIB libspdk_accel.a 00:02:56.169 LIB libspdk_init.a 00:02:56.169 SO libspdk_accel.so.15.1 00:02:56.169 SO libspdk_init.so.5.0 00:02:56.169 CC lib/virtio/virtio_vhost_user.o 00:02:56.169 SYMLINK libspdk_accel.so 00:02:56.427 CC lib/virtio/virtio_vfio_user.o 00:02:56.427 SYMLINK libspdk_init.so 00:02:56.427 CC lib/bdev/bdev.o 00:02:56.427 CC lib/bdev/bdev_rpc.o 00:02:56.427 CC lib/event/app.o 00:02:56.427 CC lib/virtio/virtio_pci.o 00:02:56.685 CC lib/bdev/bdev_zone.o 00:02:56.685 CC lib/bdev/part.o 00:02:56.685 CC lib/bdev/scsi_nvme.o 00:02:56.943 CC lib/event/reactor.o 00:02:56.943 LIB libspdk_virtio.a 00:02:56.943 CC lib/event/log_rpc.o 00:02:56.943 SO libspdk_virtio.so.7.0 00:02:56.943 CC lib/event/app_rpc.o 00:02:56.943 CC lib/event/scheduler_static.o 00:02:56.943 SYMLINK libspdk_virtio.so 00:02:57.202 LIB libspdk_event.a 00:02:57.202 SO libspdk_event.so.14.0 00:02:57.461 SYMLINK libspdk_event.so 00:02:57.461 LIB libspdk_nvme.a 00:02:57.719 SO libspdk_nvme.so.13.1 00:02:57.978 LIB libspdk_blob.a 00:02:57.978 SO libspdk_blob.so.11.0 00:02:57.978 SYMLINK libspdk_nvme.so 00:02:57.978 SYMLINK libspdk_blob.so 00:02:58.236 CC lib/lvol/lvol.o 00:02:58.236 CC lib/blobfs/blobfs.o 00:02:58.236 CC lib/blobfs/tree.o 00:02:59.171 LIB libspdk_bdev.a 00:02:59.171 LIB libspdk_blobfs.a 00:02:59.171 SO libspdk_blobfs.so.10.0 00:02:59.171 LIB libspdk_lvol.a 00:02:59.171 SO libspdk_bdev.so.15.1 00:02:59.171 SO libspdk_lvol.so.10.0 00:02:59.171 SYMLINK libspdk_blobfs.so 00:02:59.429 SYMLINK libspdk_bdev.so 00:02:59.429 
SYMLINK libspdk_lvol.so 00:02:59.429 CC lib/ublk/ublk.o 00:02:59.429 CC lib/ublk/ublk_rpc.o 00:02:59.429 CC lib/ftl/ftl_core.o 00:02:59.429 CC lib/ftl/ftl_init.o 00:02:59.429 CC lib/ftl/ftl_layout.o 00:02:59.429 CC lib/nvmf/ctrlr.o 00:02:59.429 CC lib/scsi/dev.o 00:02:59.429 CC lib/nbd/nbd.o 00:02:59.429 CC lib/ftl/ftl_debug.o 00:02:59.429 CC lib/nvmf/ctrlr_discovery.o 00:02:59.687 CC lib/nvmf/ctrlr_bdev.o 00:02:59.687 CC lib/ftl/ftl_io.o 00:02:59.687 CC lib/scsi/lun.o 00:02:59.687 CC lib/scsi/port.o 00:02:59.945 CC lib/scsi/scsi.o 00:02:59.945 CC lib/nbd/nbd_rpc.o 00:02:59.945 CC lib/ftl/ftl_sb.o 00:02:59.945 CC lib/nvmf/subsystem.o 00:02:59.946 CC lib/scsi/scsi_bdev.o 00:02:59.946 CC lib/ftl/ftl_l2p.o 00:03:00.204 CC lib/ftl/ftl_l2p_flat.o 00:03:00.204 CC lib/scsi/scsi_pr.o 00:03:00.204 LIB libspdk_nbd.a 00:03:00.204 SO libspdk_nbd.so.7.0 00:03:00.204 CC lib/scsi/scsi_rpc.o 00:03:00.204 SYMLINK libspdk_nbd.so 00:03:00.204 CC lib/scsi/task.o 00:03:00.204 CC lib/ftl/ftl_nv_cache.o 00:03:00.204 LIB libspdk_ublk.a 00:03:00.204 SO libspdk_ublk.so.3.0 00:03:00.462 CC lib/nvmf/nvmf.o 00:03:00.462 CC lib/nvmf/nvmf_rpc.o 00:03:00.462 CC lib/nvmf/transport.o 00:03:00.462 SYMLINK libspdk_ublk.so 00:03:00.462 CC lib/nvmf/tcp.o 00:03:00.462 CC lib/nvmf/stubs.o 00:03:00.462 CC lib/nvmf/mdns_server.o 00:03:00.462 LIB libspdk_scsi.a 00:03:00.719 SO libspdk_scsi.so.9.0 00:03:00.719 SYMLINK libspdk_scsi.so 00:03:00.719 CC lib/ftl/ftl_band.o 00:03:00.991 CC lib/nvmf/rdma.o 00:03:00.991 CC lib/nvmf/auth.o 00:03:01.249 CC lib/iscsi/conn.o 00:03:01.249 CC lib/ftl/ftl_band_ops.o 00:03:01.249 CC lib/ftl/ftl_writer.o 00:03:01.249 CC lib/ftl/ftl_rq.o 00:03:01.508 CC lib/iscsi/init_grp.o 00:03:01.508 CC lib/vhost/vhost.o 00:03:01.508 CC lib/iscsi/iscsi.o 00:03:01.508 CC lib/iscsi/md5.o 00:03:01.508 CC lib/iscsi/param.o 00:03:01.766 CC lib/ftl/ftl_reloc.o 00:03:01.766 CC lib/iscsi/portal_grp.o 00:03:01.766 CC lib/vhost/vhost_rpc.o 00:03:01.766 CC lib/vhost/vhost_scsi.o 00:03:01.766 CC lib/ftl/ftl_l2p_cache.o 00:03:02.023 CC lib/ftl/ftl_p2l.o 00:03:02.023 CC lib/iscsi/tgt_node.o 00:03:02.023 CC lib/vhost/vhost_blk.o 00:03:02.023 CC lib/vhost/rte_vhost_user.o 00:03:02.280 CC lib/ftl/mngt/ftl_mngt.o 00:03:02.280 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:02.540 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:02.540 CC lib/iscsi/iscsi_subsystem.o 00:03:02.540 CC lib/iscsi/iscsi_rpc.o 00:03:02.540 CC lib/iscsi/task.o 00:03:02.540 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:02.540 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:02.803 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:02.803 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:02.803 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:02.803 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:03.061 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:03.061 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:03.061 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:03.061 LIB libspdk_iscsi.a 00:03:03.061 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:03.061 CC lib/ftl/utils/ftl_conf.o 00:03:03.061 LIB libspdk_nvmf.a 00:03:03.061 SO libspdk_iscsi.so.8.0 00:03:03.061 CC lib/ftl/utils/ftl_md.o 00:03:03.061 CC lib/ftl/utils/ftl_mempool.o 00:03:03.318 CC lib/ftl/utils/ftl_property.o 00:03:03.319 CC lib/ftl/utils/ftl_bitmap.o 00:03:03.319 SO libspdk_nvmf.so.18.1 00:03:03.319 LIB libspdk_vhost.a 00:03:03.319 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:03.319 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:03.319 SO libspdk_vhost.so.8.0 00:03:03.319 SYMLINK libspdk_iscsi.so 00:03:03.319 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:03.319 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 
00:03:03.319 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:03.319 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:03.319 SYMLINK libspdk_vhost.so 00:03:03.576 SYMLINK libspdk_nvmf.so 00:03:03.576 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:03.576 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:03.576 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:03.576 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:03.576 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:03.576 CC lib/ftl/base/ftl_base_dev.o 00:03:03.576 CC lib/ftl/base/ftl_base_bdev.o 00:03:03.576 CC lib/ftl/ftl_trace.o 00:03:03.834 LIB libspdk_ftl.a 00:03:04.092 SO libspdk_ftl.so.9.0 00:03:04.351 SYMLINK libspdk_ftl.so 00:03:04.917 CC module/env_dpdk/env_dpdk_rpc.o 00:03:04.917 CC module/accel/iaa/accel_iaa.o 00:03:04.917 CC module/sock/uring/uring.o 00:03:04.917 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:04.917 CC module/sock/posix/posix.o 00:03:04.917 CC module/blob/bdev/blob_bdev.o 00:03:04.917 CC module/accel/dsa/accel_dsa.o 00:03:04.917 CC module/keyring/file/keyring.o 00:03:04.917 CC module/accel/error/accel_error.o 00:03:04.917 CC module/accel/ioat/accel_ioat.o 00:03:04.917 LIB libspdk_env_dpdk_rpc.a 00:03:04.917 SO libspdk_env_dpdk_rpc.so.6.0 00:03:04.917 SYMLINK libspdk_env_dpdk_rpc.so 00:03:04.917 CC module/accel/ioat/accel_ioat_rpc.o 00:03:04.917 CC module/keyring/file/keyring_rpc.o 00:03:04.917 LIB libspdk_scheduler_dynamic.a 00:03:04.917 CC module/accel/iaa/accel_iaa_rpc.o 00:03:04.917 CC module/accel/error/accel_error_rpc.o 00:03:05.175 SO libspdk_scheduler_dynamic.so.4.0 00:03:05.175 CC module/accel/dsa/accel_dsa_rpc.o 00:03:05.175 LIB libspdk_accel_ioat.a 00:03:05.175 SYMLINK libspdk_scheduler_dynamic.so 00:03:05.175 LIB libspdk_keyring_file.a 00:03:05.175 SO libspdk_accel_ioat.so.6.0 00:03:05.175 LIB libspdk_blob_bdev.a 00:03:05.175 LIB libspdk_accel_iaa.a 00:03:05.175 SO libspdk_keyring_file.so.1.0 00:03:05.175 LIB libspdk_accel_error.a 00:03:05.175 SO libspdk_blob_bdev.so.11.0 00:03:05.175 SO libspdk_accel_iaa.so.3.0 00:03:05.175 SO libspdk_accel_error.so.2.0 00:03:05.175 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:05.175 SYMLINK libspdk_accel_ioat.so 00:03:05.175 SYMLINK libspdk_keyring_file.so 00:03:05.175 LIB libspdk_accel_dsa.a 00:03:05.175 SYMLINK libspdk_blob_bdev.so 00:03:05.175 SYMLINK libspdk_accel_iaa.so 00:03:05.175 SYMLINK libspdk_accel_error.so 00:03:05.175 SO libspdk_accel_dsa.so.5.0 00:03:05.175 CC module/scheduler/gscheduler/gscheduler.o 00:03:05.432 SYMLINK libspdk_accel_dsa.so 00:03:05.432 LIB libspdk_scheduler_dpdk_governor.a 00:03:05.432 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:05.432 CC module/keyring/linux/keyring.o 00:03:05.432 LIB libspdk_scheduler_gscheduler.a 00:03:05.432 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:05.432 SO libspdk_scheduler_gscheduler.so.4.0 00:03:05.432 LIB libspdk_sock_uring.a 00:03:05.432 CC module/bdev/error/vbdev_error.o 00:03:05.432 CC module/bdev/delay/vbdev_delay.o 00:03:05.432 CC module/bdev/lvol/vbdev_lvol.o 00:03:05.432 CC module/blobfs/bdev/blobfs_bdev.o 00:03:05.432 CC module/bdev/gpt/gpt.o 00:03:05.432 SO libspdk_sock_uring.so.5.0 00:03:05.690 SYMLINK libspdk_scheduler_gscheduler.so 00:03:05.690 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:05.690 CC module/keyring/linux/keyring_rpc.o 00:03:05.690 SYMLINK libspdk_sock_uring.so 00:03:05.690 LIB libspdk_sock_posix.a 00:03:05.690 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:05.690 SO libspdk_sock_posix.so.6.0 00:03:05.690 CC module/bdev/malloc/bdev_malloc.o 00:03:05.690 LIB libspdk_keyring_linux.a 00:03:05.690 CC 
module/bdev/malloc/bdev_malloc_rpc.o 00:03:05.690 SYMLINK libspdk_sock_posix.so 00:03:05.690 SO libspdk_keyring_linux.so.1.0 00:03:05.690 LIB libspdk_blobfs_bdev.a 00:03:05.690 CC module/bdev/gpt/vbdev_gpt.o 00:03:05.948 CC module/bdev/error/vbdev_error_rpc.o 00:03:05.948 SO libspdk_blobfs_bdev.so.6.0 00:03:05.948 SYMLINK libspdk_keyring_linux.so 00:03:05.948 SYMLINK libspdk_blobfs_bdev.so 00:03:05.948 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:05.948 CC module/bdev/null/bdev_null.o 00:03:05.948 LIB libspdk_bdev_error.a 00:03:05.948 LIB libspdk_bdev_gpt.a 00:03:05.948 SO libspdk_bdev_error.so.6.0 00:03:05.948 LIB libspdk_bdev_lvol.a 00:03:06.206 CC module/bdev/nvme/bdev_nvme.o 00:03:06.206 SO libspdk_bdev_gpt.so.6.0 00:03:06.206 LIB libspdk_bdev_malloc.a 00:03:06.206 CC module/bdev/raid/bdev_raid.o 00:03:06.206 SO libspdk_bdev_lvol.so.6.0 00:03:06.206 LIB libspdk_bdev_delay.a 00:03:06.206 SO libspdk_bdev_malloc.so.6.0 00:03:06.206 SYMLINK libspdk_bdev_error.so 00:03:06.206 CC module/bdev/passthru/vbdev_passthru.o 00:03:06.206 SO libspdk_bdev_delay.so.6.0 00:03:06.206 SYMLINK libspdk_bdev_gpt.so 00:03:06.206 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:06.206 SYMLINK libspdk_bdev_lvol.so 00:03:06.206 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:06.206 CC module/bdev/split/vbdev_split.o 00:03:06.206 SYMLINK libspdk_bdev_malloc.so 00:03:06.206 CC module/bdev/split/vbdev_split_rpc.o 00:03:06.206 SYMLINK libspdk_bdev_delay.so 00:03:06.206 CC module/bdev/nvme/nvme_rpc.o 00:03:06.206 CC module/bdev/null/bdev_null_rpc.o 00:03:06.206 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:06.463 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:06.463 LIB libspdk_bdev_passthru.a 00:03:06.463 LIB libspdk_bdev_split.a 00:03:06.463 LIB libspdk_bdev_null.a 00:03:06.463 SO libspdk_bdev_passthru.so.6.0 00:03:06.463 SO libspdk_bdev_split.so.6.0 00:03:06.463 SO libspdk_bdev_null.so.6.0 00:03:06.463 SYMLINK libspdk_bdev_split.so 00:03:06.463 SYMLINK libspdk_bdev_null.so 00:03:06.463 SYMLINK libspdk_bdev_passthru.so 00:03:06.463 CC module/bdev/raid/bdev_raid_rpc.o 00:03:06.463 CC module/bdev/uring/bdev_uring.o 00:03:06.721 LIB libspdk_bdev_zone_block.a 00:03:06.721 CC module/bdev/aio/bdev_aio.o 00:03:06.721 SO libspdk_bdev_zone_block.so.6.0 00:03:06.721 CC module/bdev/ftl/bdev_ftl.o 00:03:06.721 CC module/bdev/iscsi/bdev_iscsi.o 00:03:06.721 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:06.721 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:06.721 SYMLINK libspdk_bdev_zone_block.so 00:03:06.721 CC module/bdev/aio/bdev_aio_rpc.o 00:03:06.721 CC module/bdev/uring/bdev_uring_rpc.o 00:03:06.978 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:06.978 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:06.978 LIB libspdk_bdev_uring.a 00:03:06.978 LIB libspdk_bdev_aio.a 00:03:06.978 LIB libspdk_bdev_ftl.a 00:03:06.978 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:06.978 SO libspdk_bdev_uring.so.6.0 00:03:06.978 SO libspdk_bdev_aio.so.6.0 00:03:06.978 SO libspdk_bdev_ftl.so.6.0 00:03:06.978 SYMLINK libspdk_bdev_uring.so 00:03:06.978 CC module/bdev/raid/bdev_raid_sb.o 00:03:06.978 CC module/bdev/nvme/bdev_mdns_client.o 00:03:06.978 SYMLINK libspdk_bdev_aio.so 00:03:06.978 CC module/bdev/nvme/vbdev_opal.o 00:03:06.978 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:07.237 SYMLINK libspdk_bdev_ftl.so 00:03:07.237 CC module/bdev/raid/raid0.o 00:03:07.237 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:07.237 LIB libspdk_bdev_iscsi.a 00:03:07.237 CC module/bdev/raid/raid1.o 00:03:07.237 SO libspdk_bdev_iscsi.so.6.0 00:03:07.237 LIB 
libspdk_bdev_virtio.a 00:03:07.237 CC module/bdev/raid/concat.o 00:03:07.237 SYMLINK libspdk_bdev_iscsi.so 00:03:07.237 SO libspdk_bdev_virtio.so.6.0 00:03:07.495 SYMLINK libspdk_bdev_virtio.so 00:03:07.495 LIB libspdk_bdev_raid.a 00:03:07.495 SO libspdk_bdev_raid.so.6.0 00:03:07.754 SYMLINK libspdk_bdev_raid.so 00:03:08.321 LIB libspdk_bdev_nvme.a 00:03:08.321 SO libspdk_bdev_nvme.so.7.0 00:03:08.579 SYMLINK libspdk_bdev_nvme.so 00:03:09.145 CC module/event/subsystems/vmd/vmd.o 00:03:09.145 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:09.145 CC module/event/subsystems/scheduler/scheduler.o 00:03:09.145 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:09.145 CC module/event/subsystems/iobuf/iobuf.o 00:03:09.145 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:09.145 CC module/event/subsystems/sock/sock.o 00:03:09.145 CC module/event/subsystems/keyring/keyring.o 00:03:09.145 LIB libspdk_event_scheduler.a 00:03:09.145 LIB libspdk_event_vmd.a 00:03:09.145 SO libspdk_event_scheduler.so.4.0 00:03:09.145 LIB libspdk_event_sock.a 00:03:09.145 LIB libspdk_event_vhost_blk.a 00:03:09.145 SO libspdk_event_vmd.so.6.0 00:03:09.145 LIB libspdk_event_keyring.a 00:03:09.145 SO libspdk_event_sock.so.5.0 00:03:09.145 LIB libspdk_event_iobuf.a 00:03:09.145 SO libspdk_event_vhost_blk.so.3.0 00:03:09.145 SYMLINK libspdk_event_scheduler.so 00:03:09.403 SO libspdk_event_keyring.so.1.0 00:03:09.403 SO libspdk_event_iobuf.so.3.0 00:03:09.403 SYMLINK libspdk_event_vhost_blk.so 00:03:09.403 SYMLINK libspdk_event_vmd.so 00:03:09.403 SYMLINK libspdk_event_sock.so 00:03:09.403 SYMLINK libspdk_event_keyring.so 00:03:09.403 SYMLINK libspdk_event_iobuf.so 00:03:09.661 CC module/event/subsystems/accel/accel.o 00:03:09.920 LIB libspdk_event_accel.a 00:03:09.920 SO libspdk_event_accel.so.6.0 00:03:09.920 SYMLINK libspdk_event_accel.so 00:03:10.179 CC module/event/subsystems/bdev/bdev.o 00:03:10.437 LIB libspdk_event_bdev.a 00:03:10.437 SO libspdk_event_bdev.so.6.0 00:03:10.437 SYMLINK libspdk_event_bdev.so 00:03:10.695 CC module/event/subsystems/ublk/ublk.o 00:03:10.695 CC module/event/subsystems/scsi/scsi.o 00:03:10.695 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:10.695 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:10.695 CC module/event/subsystems/nbd/nbd.o 00:03:10.953 LIB libspdk_event_ublk.a 00:03:10.953 LIB libspdk_event_scsi.a 00:03:10.953 LIB libspdk_event_nbd.a 00:03:10.953 SO libspdk_event_ublk.so.3.0 00:03:10.953 SO libspdk_event_scsi.so.6.0 00:03:10.953 SO libspdk_event_nbd.so.6.0 00:03:10.953 LIB libspdk_event_nvmf.a 00:03:10.953 SYMLINK libspdk_event_ublk.so 00:03:10.953 SO libspdk_event_nvmf.so.6.0 00:03:10.953 SYMLINK libspdk_event_scsi.so 00:03:10.953 SYMLINK libspdk_event_nbd.so 00:03:11.211 SYMLINK libspdk_event_nvmf.so 00:03:11.211 CC module/event/subsystems/iscsi/iscsi.o 00:03:11.211 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:11.469 LIB libspdk_event_vhost_scsi.a 00:03:11.469 LIB libspdk_event_iscsi.a 00:03:11.469 SO libspdk_event_vhost_scsi.so.3.0 00:03:11.469 SO libspdk_event_iscsi.so.6.0 00:03:11.469 SYMLINK libspdk_event_vhost_scsi.so 00:03:11.469 SYMLINK libspdk_event_iscsi.so 00:03:11.727 SO libspdk.so.6.0 00:03:11.727 SYMLINK libspdk.so 00:03:11.986 CXX app/trace/trace.o 00:03:11.986 CC app/trace_record/trace_record.o 00:03:11.986 TEST_HEADER include/spdk/accel.h 00:03:11.986 TEST_HEADER include/spdk/accel_module.h 00:03:11.986 TEST_HEADER include/spdk/assert.h 00:03:11.986 TEST_HEADER include/spdk/barrier.h 00:03:11.986 TEST_HEADER include/spdk/base64.h 
00:03:11.986 TEST_HEADER include/spdk/bdev.h 00:03:11.986 TEST_HEADER include/spdk/bdev_module.h 00:03:11.986 TEST_HEADER include/spdk/bdev_zone.h 00:03:11.986 TEST_HEADER include/spdk/bit_array.h 00:03:11.986 TEST_HEADER include/spdk/bit_pool.h 00:03:11.986 TEST_HEADER include/spdk/blob_bdev.h 00:03:11.986 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:11.986 TEST_HEADER include/spdk/blobfs.h 00:03:11.986 TEST_HEADER include/spdk/blob.h 00:03:11.986 TEST_HEADER include/spdk/conf.h 00:03:11.986 TEST_HEADER include/spdk/config.h 00:03:11.986 TEST_HEADER include/spdk/cpuset.h 00:03:11.986 TEST_HEADER include/spdk/crc16.h 00:03:11.986 TEST_HEADER include/spdk/crc32.h 00:03:11.986 TEST_HEADER include/spdk/crc64.h 00:03:11.986 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:11.986 TEST_HEADER include/spdk/dif.h 00:03:11.986 TEST_HEADER include/spdk/dma.h 00:03:11.986 TEST_HEADER include/spdk/endian.h 00:03:11.986 TEST_HEADER include/spdk/env_dpdk.h 00:03:11.986 TEST_HEADER include/spdk/env.h 00:03:11.986 TEST_HEADER include/spdk/event.h 00:03:11.986 TEST_HEADER include/spdk/fd_group.h 00:03:11.986 TEST_HEADER include/spdk/fd.h 00:03:11.986 TEST_HEADER include/spdk/file.h 00:03:11.986 TEST_HEADER include/spdk/ftl.h 00:03:11.986 CC examples/util/zipf/zipf.o 00:03:11.986 TEST_HEADER include/spdk/gpt_spec.h 00:03:11.986 CC app/nvmf_tgt/nvmf_main.o 00:03:11.986 TEST_HEADER include/spdk/hexlify.h 00:03:11.987 TEST_HEADER include/spdk/histogram_data.h 00:03:11.987 CC examples/ioat/perf/perf.o 00:03:11.987 TEST_HEADER include/spdk/idxd.h 00:03:11.987 CC test/thread/poller_perf/poller_perf.o 00:03:11.987 TEST_HEADER include/spdk/idxd_spec.h 00:03:11.987 TEST_HEADER include/spdk/init.h 00:03:11.987 TEST_HEADER include/spdk/ioat.h 00:03:11.987 TEST_HEADER include/spdk/ioat_spec.h 00:03:11.987 TEST_HEADER include/spdk/iscsi_spec.h 00:03:11.987 TEST_HEADER include/spdk/json.h 00:03:11.987 TEST_HEADER include/spdk/jsonrpc.h 00:03:12.244 TEST_HEADER include/spdk/keyring.h 00:03:12.244 TEST_HEADER include/spdk/keyring_module.h 00:03:12.244 TEST_HEADER include/spdk/likely.h 00:03:12.244 TEST_HEADER include/spdk/log.h 00:03:12.244 CC test/app/bdev_svc/bdev_svc.o 00:03:12.244 TEST_HEADER include/spdk/lvol.h 00:03:12.244 TEST_HEADER include/spdk/memory.h 00:03:12.244 TEST_HEADER include/spdk/mmio.h 00:03:12.244 TEST_HEADER include/spdk/nbd.h 00:03:12.244 TEST_HEADER include/spdk/notify.h 00:03:12.244 TEST_HEADER include/spdk/nvme.h 00:03:12.244 TEST_HEADER include/spdk/nvme_intel.h 00:03:12.244 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:12.244 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:12.244 TEST_HEADER include/spdk/nvme_spec.h 00:03:12.244 TEST_HEADER include/spdk/nvme_zns.h 00:03:12.244 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:12.244 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:12.244 TEST_HEADER include/spdk/nvmf.h 00:03:12.244 TEST_HEADER include/spdk/nvmf_spec.h 00:03:12.244 TEST_HEADER include/spdk/nvmf_transport.h 00:03:12.244 CC test/dma/test_dma/test_dma.o 00:03:12.244 TEST_HEADER include/spdk/opal.h 00:03:12.244 TEST_HEADER include/spdk/opal_spec.h 00:03:12.244 TEST_HEADER include/spdk/pci_ids.h 00:03:12.244 TEST_HEADER include/spdk/pipe.h 00:03:12.244 TEST_HEADER include/spdk/queue.h 00:03:12.244 TEST_HEADER include/spdk/reduce.h 00:03:12.244 TEST_HEADER include/spdk/rpc.h 00:03:12.244 TEST_HEADER include/spdk/scheduler.h 00:03:12.244 TEST_HEADER include/spdk/scsi.h 00:03:12.244 TEST_HEADER include/spdk/scsi_spec.h 00:03:12.244 TEST_HEADER include/spdk/sock.h 00:03:12.244 TEST_HEADER 
include/spdk/stdinc.h 00:03:12.244 TEST_HEADER include/spdk/string.h 00:03:12.244 TEST_HEADER include/spdk/thread.h 00:03:12.244 TEST_HEADER include/spdk/trace.h 00:03:12.244 TEST_HEADER include/spdk/trace_parser.h 00:03:12.244 TEST_HEADER include/spdk/tree.h 00:03:12.244 TEST_HEADER include/spdk/ublk.h 00:03:12.244 TEST_HEADER include/spdk/util.h 00:03:12.244 TEST_HEADER include/spdk/uuid.h 00:03:12.244 TEST_HEADER include/spdk/version.h 00:03:12.244 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:12.244 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:12.244 TEST_HEADER include/spdk/vhost.h 00:03:12.244 TEST_HEADER include/spdk/vmd.h 00:03:12.244 TEST_HEADER include/spdk/xor.h 00:03:12.244 LINK interrupt_tgt 00:03:12.244 TEST_HEADER include/spdk/zipf.h 00:03:12.244 CXX test/cpp_headers/accel.o 00:03:12.244 LINK zipf 00:03:12.244 LINK poller_perf 00:03:12.244 LINK spdk_trace_record 00:03:12.244 LINK nvmf_tgt 00:03:12.244 LINK ioat_perf 00:03:12.500 LINK bdev_svc 00:03:12.500 CXX test/cpp_headers/accel_module.o 00:03:12.500 LINK spdk_trace 00:03:12.500 CXX test/cpp_headers/assert.o 00:03:12.500 CXX test/cpp_headers/barrier.o 00:03:12.500 CXX test/cpp_headers/base64.o 00:03:12.500 LINK test_dma 00:03:12.757 CC examples/ioat/verify/verify.o 00:03:12.757 CXX test/cpp_headers/bdev.o 00:03:12.757 CC examples/thread/thread/thread_ex.o 00:03:12.757 CC test/app/histogram_perf/histogram_perf.o 00:03:12.757 CC app/iscsi_tgt/iscsi_tgt.o 00:03:12.757 CC examples/sock/hello_world/hello_sock.o 00:03:12.757 CC examples/vmd/lsvmd/lsvmd.o 00:03:12.757 CC examples/idxd/perf/perf.o 00:03:12.757 CXX test/cpp_headers/bdev_module.o 00:03:12.757 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:13.014 LINK verify 00:03:13.014 CC test/app/jsoncat/jsoncat.o 00:03:13.014 LINK histogram_perf 00:03:13.014 LINK lsvmd 00:03:13.014 LINK iscsi_tgt 00:03:13.014 LINK thread 00:03:13.014 LINK hello_sock 00:03:13.014 LINK jsoncat 00:03:13.014 CXX test/cpp_headers/bdev_zone.o 00:03:13.271 CC test/app/stub/stub.o 00:03:13.271 LINK idxd_perf 00:03:13.271 CC examples/vmd/led/led.o 00:03:13.271 LINK nvme_fuzz 00:03:13.271 LINK stub 00:03:13.271 CXX test/cpp_headers/bit_array.o 00:03:13.542 CC test/env/vtophys/vtophys.o 00:03:13.542 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:13.542 CC app/spdk_lspci/spdk_lspci.o 00:03:13.542 CC test/env/mem_callbacks/mem_callbacks.o 00:03:13.542 CC app/spdk_tgt/spdk_tgt.o 00:03:13.542 LINK led 00:03:13.542 CXX test/cpp_headers/bit_pool.o 00:03:13.542 LINK spdk_lspci 00:03:13.542 LINK vtophys 00:03:13.542 LINK env_dpdk_post_init 00:03:13.542 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:13.542 CC app/spdk_nvme_perf/perf.o 00:03:13.811 CC examples/nvme/hello_world/hello_world.o 00:03:13.811 CXX test/cpp_headers/blob_bdev.o 00:03:13.811 LINK spdk_tgt 00:03:13.811 CC test/env/memory/memory_ut.o 00:03:14.068 CXX test/cpp_headers/blobfs_bdev.o 00:03:14.068 LINK hello_world 00:03:14.068 CC examples/accel/perf/accel_perf.o 00:03:14.068 CC test/nvme/aer/aer.o 00:03:14.068 CC test/event/event_perf/event_perf.o 00:03:14.068 LINK mem_callbacks 00:03:14.068 CXX test/cpp_headers/blobfs.o 00:03:14.068 CC test/event/reactor/reactor.o 00:03:14.327 LINK event_perf 00:03:14.327 CC examples/nvme/reconnect/reconnect.o 00:03:14.327 LINK reactor 00:03:14.327 CXX test/cpp_headers/blob.o 00:03:14.327 LINK aer 00:03:14.327 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:14.327 LINK accel_perf 00:03:14.585 CC examples/nvme/arbitration/arbitration.o 00:03:14.585 CXX test/cpp_headers/conf.o 00:03:14.585 LINK 
spdk_nvme_perf 00:03:14.585 CC test/event/reactor_perf/reactor_perf.o 00:03:14.585 LINK reconnect 00:03:14.585 CC test/nvme/reset/reset.o 00:03:14.585 CXX test/cpp_headers/config.o 00:03:14.842 CXX test/cpp_headers/cpuset.o 00:03:14.842 CC examples/nvme/hotplug/hotplug.o 00:03:14.842 LINK reactor_perf 00:03:14.842 CC app/spdk_nvme_identify/identify.o 00:03:14.842 LINK arbitration 00:03:14.842 LINK reset 00:03:14.842 CXX test/cpp_headers/crc16.o 00:03:14.842 LINK nvme_manage 00:03:14.842 CC app/spdk_nvme_discover/discovery_aer.o 00:03:15.099 LINK hotplug 00:03:15.099 CC test/event/app_repeat/app_repeat.o 00:03:15.099 LINK memory_ut 00:03:15.099 LINK spdk_nvme_discover 00:03:15.099 CXX test/cpp_headers/crc32.o 00:03:15.099 CC test/nvme/sgl/sgl.o 00:03:15.099 CC test/nvme/e2edp/nvme_dp.o 00:03:15.099 LINK app_repeat 00:03:15.356 CC test/rpc_client/rpc_client_test.o 00:03:15.356 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:15.356 LINK iscsi_fuzz 00:03:15.356 CXX test/cpp_headers/crc64.o 00:03:15.356 CC test/env/pci/pci_ut.o 00:03:15.356 LINK rpc_client_test 00:03:15.356 LINK cmb_copy 00:03:15.614 LINK nvme_dp 00:03:15.614 CXX test/cpp_headers/dif.o 00:03:15.614 CC test/event/scheduler/scheduler.o 00:03:15.614 CC test/accel/dif/dif.o 00:03:15.614 LINK sgl 00:03:15.614 LINK spdk_nvme_identify 00:03:15.614 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:15.614 CXX test/cpp_headers/dma.o 00:03:15.872 CC examples/nvme/abort/abort.o 00:03:15.872 LINK pci_ut 00:03:15.872 LINK scheduler 00:03:15.872 CC test/blobfs/mkfs/mkfs.o 00:03:15.872 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:15.872 CXX test/cpp_headers/endian.o 00:03:15.872 CC test/nvme/overhead/overhead.o 00:03:15.872 CC test/lvol/esnap/esnap.o 00:03:15.872 CC app/spdk_top/spdk_top.o 00:03:16.130 LINK dif 00:03:16.130 LINK mkfs 00:03:16.130 CXX test/cpp_headers/env_dpdk.o 00:03:16.130 LINK abort 00:03:16.130 CC app/vhost/vhost.o 00:03:16.130 CC app/spdk_dd/spdk_dd.o 00:03:16.130 LINK overhead 00:03:16.130 LINK vhost_fuzz 00:03:16.130 CXX test/cpp_headers/env.o 00:03:16.388 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:16.388 LINK vhost 00:03:16.388 CXX test/cpp_headers/event.o 00:03:16.388 CC test/nvme/err_injection/err_injection.o 00:03:16.388 CC test/bdev/bdevio/bdevio.o 00:03:16.645 CC examples/blob/hello_world/hello_blob.o 00:03:16.645 LINK pmr_persistence 00:03:16.645 CC examples/blob/cli/blobcli.o 00:03:16.645 CXX test/cpp_headers/fd_group.o 00:03:16.645 LINK spdk_dd 00:03:16.645 LINK err_injection 00:03:16.645 CC test/nvme/startup/startup.o 00:03:16.645 CXX test/cpp_headers/fd.o 00:03:16.645 CC test/nvme/reserve/reserve.o 00:03:16.645 LINK hello_blob 00:03:16.902 CXX test/cpp_headers/file.o 00:03:16.902 LINK spdk_top 00:03:16.902 CXX test/cpp_headers/ftl.o 00:03:16.902 LINK bdevio 00:03:16.902 LINK startup 00:03:16.902 LINK reserve 00:03:16.902 CC test/nvme/simple_copy/simple_copy.o 00:03:17.158 LINK blobcli 00:03:17.159 CXX test/cpp_headers/gpt_spec.o 00:03:17.159 CC test/nvme/connect_stress/connect_stress.o 00:03:17.159 CC test/nvme/boot_partition/boot_partition.o 00:03:17.159 CC test/nvme/compliance/nvme_compliance.o 00:03:17.159 CC app/fio/nvme/fio_plugin.o 00:03:17.159 CC test/nvme/fused_ordering/fused_ordering.o 00:03:17.159 CXX test/cpp_headers/hexlify.o 00:03:17.159 LINK simple_copy 00:03:17.159 LINK connect_stress 00:03:17.415 LINK boot_partition 00:03:17.415 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:17.415 CC examples/bdev/hello_world/hello_bdev.o 00:03:17.415 CXX test/cpp_headers/histogram_data.o 
00:03:17.415 LINK fused_ordering 00:03:17.415 LINK nvme_compliance 00:03:17.415 CC test/nvme/fdp/fdp.o 00:03:17.415 LINK doorbell_aers 00:03:17.415 CC test/nvme/cuse/cuse.o 00:03:17.672 CXX test/cpp_headers/idxd.o 00:03:17.672 CC examples/bdev/bdevperf/bdevperf.o 00:03:17.672 LINK hello_bdev 00:03:17.672 CXX test/cpp_headers/idxd_spec.o 00:03:17.672 CC app/fio/bdev/fio_plugin.o 00:03:17.672 CXX test/cpp_headers/init.o 00:03:17.672 CXX test/cpp_headers/ioat.o 00:03:17.672 LINK spdk_nvme 00:03:17.672 CXX test/cpp_headers/ioat_spec.o 00:03:17.672 CXX test/cpp_headers/iscsi_spec.o 00:03:17.929 LINK fdp 00:03:17.929 CXX test/cpp_headers/json.o 00:03:17.929 CXX test/cpp_headers/jsonrpc.o 00:03:17.929 CXX test/cpp_headers/keyring.o 00:03:17.929 CXX test/cpp_headers/keyring_module.o 00:03:17.929 CXX test/cpp_headers/likely.o 00:03:17.929 CXX test/cpp_headers/log.o 00:03:17.929 CXX test/cpp_headers/lvol.o 00:03:17.929 CXX test/cpp_headers/memory.o 00:03:18.187 CXX test/cpp_headers/mmio.o 00:03:18.187 CXX test/cpp_headers/nbd.o 00:03:18.187 CXX test/cpp_headers/notify.o 00:03:18.187 CXX test/cpp_headers/nvme.o 00:03:18.187 CXX test/cpp_headers/nvme_intel.o 00:03:18.187 LINK spdk_bdev 00:03:18.187 CXX test/cpp_headers/nvme_ocssd.o 00:03:18.187 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:18.187 CXX test/cpp_headers/nvme_spec.o 00:03:18.187 CXX test/cpp_headers/nvme_zns.o 00:03:18.187 CXX test/cpp_headers/nvmf_cmd.o 00:03:18.444 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:18.444 LINK bdevperf 00:03:18.444 CXX test/cpp_headers/nvmf.o 00:03:18.444 CXX test/cpp_headers/nvmf_spec.o 00:03:18.444 CXX test/cpp_headers/nvmf_transport.o 00:03:18.444 CXX test/cpp_headers/opal.o 00:03:18.444 CXX test/cpp_headers/opal_spec.o 00:03:18.444 CXX test/cpp_headers/pci_ids.o 00:03:18.444 CXX test/cpp_headers/pipe.o 00:03:18.444 CXX test/cpp_headers/queue.o 00:03:18.444 CXX test/cpp_headers/reduce.o 00:03:18.444 CXX test/cpp_headers/rpc.o 00:03:18.701 CXX test/cpp_headers/scheduler.o 00:03:18.701 CXX test/cpp_headers/scsi.o 00:03:18.701 CXX test/cpp_headers/scsi_spec.o 00:03:18.701 CXX test/cpp_headers/sock.o 00:03:18.701 CXX test/cpp_headers/stdinc.o 00:03:18.701 CXX test/cpp_headers/string.o 00:03:18.701 CC examples/nvmf/nvmf/nvmf.o 00:03:18.701 CXX test/cpp_headers/thread.o 00:03:18.701 CXX test/cpp_headers/trace.o 00:03:18.701 CXX test/cpp_headers/trace_parser.o 00:03:18.701 CXX test/cpp_headers/tree.o 00:03:18.958 CXX test/cpp_headers/ublk.o 00:03:18.958 CXX test/cpp_headers/util.o 00:03:18.958 CXX test/cpp_headers/uuid.o 00:03:18.958 LINK cuse 00:03:18.958 CXX test/cpp_headers/version.o 00:03:18.958 CXX test/cpp_headers/vfio_user_pci.o 00:03:18.958 CXX test/cpp_headers/vfio_user_spec.o 00:03:18.958 CXX test/cpp_headers/vhost.o 00:03:18.958 CXX test/cpp_headers/vmd.o 00:03:18.958 CXX test/cpp_headers/xor.o 00:03:18.958 CXX test/cpp_headers/zipf.o 00:03:18.958 LINK nvmf 00:03:20.856 LINK esnap 00:03:21.114 00:03:21.114 real 1m6.422s 00:03:21.114 user 6m58.651s 00:03:21.114 sys 1m31.532s 00:03:21.114 07:07:30 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:21.114 07:07:30 make -- common/autotest_common.sh@10 -- $ set +x 00:03:21.114 ************************************ 00:03:21.114 END TEST make 00:03:21.114 ************************************ 00:03:21.114 07:07:30 -- common/autotest_common.sh@1142 -- $ return 0 00:03:21.114 07:07:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:21.114 07:07:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:21.114 07:07:30 -- pm/common@40 -- $ 
local monitor pid pids signal=TERM 00:03:21.114 07:07:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.114 07:07:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:21.372 07:07:30 -- pm/common@44 -- $ pid=5186 00:03:21.372 07:07:30 -- pm/common@50 -- $ kill -TERM 5186 00:03:21.372 07:07:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.372 07:07:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:21.372 07:07:30 -- pm/common@44 -- $ pid=5188 00:03:21.372 07:07:30 -- pm/common@50 -- $ kill -TERM 5188 00:03:21.372 07:07:30 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:21.372 07:07:30 -- nvmf/common.sh@7 -- # uname -s 00:03:21.372 07:07:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:21.372 07:07:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:21.372 07:07:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:21.372 07:07:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:21.372 07:07:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:21.372 07:07:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:21.372 07:07:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:21.372 07:07:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:21.372 07:07:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:21.372 07:07:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:21.372 07:07:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:03:21.372 07:07:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:03:21.373 07:07:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:21.373 07:07:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:21.373 07:07:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:21.373 07:07:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:21.373 07:07:30 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:21.373 07:07:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:21.373 07:07:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:21.373 07:07:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:21.373 07:07:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.373 07:07:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.373 07:07:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.373 07:07:30 -- paths/export.sh@5 -- # export PATH 00:03:21.373 07:07:30 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.373 07:07:30 -- nvmf/common.sh@47 -- # : 0 00:03:21.373 07:07:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:21.373 07:07:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:21.373 07:07:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:21.373 07:07:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:21.373 07:07:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:21.373 07:07:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:21.373 07:07:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:21.373 07:07:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:21.373 07:07:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:21.373 07:07:30 -- spdk/autotest.sh@32 -- # uname -s 00:03:21.373 07:07:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:21.373 07:07:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:21.373 07:07:30 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:21.373 07:07:30 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:21.373 07:07:30 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:21.373 07:07:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:21.373 07:07:30 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:21.373 07:07:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:21.373 07:07:30 -- spdk/autotest.sh@48 -- # udevadm_pid=52838 00:03:21.373 07:07:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:21.373 07:07:30 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:21.373 07:07:30 -- pm/common@17 -- # local monitor 00:03:21.373 07:07:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.373 07:07:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.373 07:07:30 -- pm/common@25 -- # sleep 1 00:03:21.373 07:07:30 -- pm/common@21 -- # date +%s 00:03:21.373 07:07:30 -- pm/common@21 -- # date +%s 00:03:21.373 07:07:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721027250 00:03:21.373 07:07:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721027250 00:03:21.373 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721027250_collect-vmstat.pm.log 00:03:21.373 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721027250_collect-cpu-load.pm.log 00:03:22.309 07:07:31 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:22.309 07:07:31 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:22.309 07:07:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:22.309 07:07:31 -- common/autotest_common.sh@10 -- # set +x 00:03:22.309 07:07:31 -- spdk/autotest.sh@59 -- # create_test_list 00:03:22.309 07:07:31 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:22.309 07:07:31 -- common/autotest_common.sh@10 -- # set +x 00:03:22.568 07:07:31 -- 
spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:22.568 07:07:31 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:22.568 07:07:31 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:22.568 07:07:31 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:22.568 07:07:31 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:22.568 07:07:31 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:22.568 07:07:31 -- common/autotest_common.sh@1455 -- # uname 00:03:22.568 07:07:31 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:22.568 07:07:31 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:22.568 07:07:31 -- common/autotest_common.sh@1475 -- # uname 00:03:22.568 07:07:31 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:22.568 07:07:31 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:22.568 07:07:31 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:22.568 07:07:31 -- spdk/autotest.sh@72 -- # hash lcov 00:03:22.568 07:07:31 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:22.568 07:07:31 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:22.568 --rc lcov_branch_coverage=1 00:03:22.568 --rc lcov_function_coverage=1 00:03:22.568 --rc genhtml_branch_coverage=1 00:03:22.568 --rc genhtml_function_coverage=1 00:03:22.568 --rc genhtml_legend=1 00:03:22.568 --rc geninfo_all_blocks=1 00:03:22.568 ' 00:03:22.568 07:07:31 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:22.568 --rc lcov_branch_coverage=1 00:03:22.568 --rc lcov_function_coverage=1 00:03:22.568 --rc genhtml_branch_coverage=1 00:03:22.568 --rc genhtml_function_coverage=1 00:03:22.568 --rc genhtml_legend=1 00:03:22.568 --rc geninfo_all_blocks=1 00:03:22.568 ' 00:03:22.568 07:07:31 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:22.568 --rc lcov_branch_coverage=1 00:03:22.568 --rc lcov_function_coverage=1 00:03:22.568 --rc genhtml_branch_coverage=1 00:03:22.568 --rc genhtml_function_coverage=1 00:03:22.568 --rc genhtml_legend=1 00:03:22.568 --rc geninfo_all_blocks=1 00:03:22.568 --no-external' 00:03:22.568 07:07:31 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:22.568 --rc lcov_branch_coverage=1 00:03:22.568 --rc lcov_function_coverage=1 00:03:22.568 --rc genhtml_branch_coverage=1 00:03:22.568 --rc genhtml_function_coverage=1 00:03:22.568 --rc genhtml_legend=1 00:03:22.568 --rc geninfo_all_blocks=1 00:03:22.568 --no-external' 00:03:22.568 07:07:31 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:22.568 lcov: LCOV version 1.14 00:03:22.568 07:07:31 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:37.440 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:37.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 
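The trace above shows autotest.sh exporting LCOV_OPTS and taking an initial ("-i") lcov capture as a zero-count coverage baseline before any tests run; the "no functions found" warnings that follow appear to be geninfo noting objects that contain no executable code. A minimal sketch of that baseline-then-merge flow is below; SRC, OUT, and the run_tests.sh placeholder are illustrative assumptions, not values taken from this run, and this is not the exact SPDK script.

    SRC=/home/vagrant/spdk_repo/spdk        # assumption: gcov-instrumented build tree
    OUT=$SRC/../output                      # assumption: output directory

    # 1. Baseline: record zero-count data for every instrumented file.
    lcov --rc lcov_branch_coverage=1 --no-external -q -c -i \
         -d "$SRC" -t Baseline -o "$OUT/cov_base.info"

    # 2. Run the test suite so the .gcda counters get populated.
    # ./run_tests.sh    (placeholder for whatever exercises the code)

    # 3. Capture post-test counters and merge with the baseline so files
    #    that were never executed still show up with 0% coverage.
    lcov --rc lcov_branch_coverage=1 --no-external -q -c \
         -d "$SRC" -t Tests -o "$OUT/cov_test.info"
    lcov -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

    # 4. Optional HTML report.
    genhtml "$OUT/cov_total.info" -o "$OUT/coverage"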
00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:52.321 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:52.321 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:52.321 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:52.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 
00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:52.322 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:52.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:53.695 07:08:02 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:53.695 07:08:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:53.695 07:08:02 -- common/autotest_common.sh@10 -- # set +x 00:03:53.695 07:08:02 -- spdk/autotest.sh@91 -- # rm -f 00:03:53.695 07:08:02 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:54.630 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:54.630 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:54.630 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:54.630 07:08:03 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:54.630 07:08:03 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:54.630 07:08:03 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:54.630 07:08:03 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:54.630 07:08:03 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:54.630 07:08:03 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:54.630 07:08:03 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:54.630 07:08:03 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:54.630 07:08:03 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:54.630 07:08:03 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:54.630 07:08:03 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:54.630 07:08:03 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:54.630 07:08:03 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:54.630 07:08:03 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:54.630 07:08:03 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:54.630 07:08:03 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:54.630 07:08:03 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:54.630 07:08:03 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:54.630 07:08:03 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:54.630 07:08:03 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:54.630 07:08:03 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:54.630 07:08:03 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:54.630 07:08:03 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:54.630 07:08:03 -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:54.630 07:08:03 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:54.630 07:08:03 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:54.630 07:08:03 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:54.630 07:08:03 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:54.630 07:08:03 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:54.630 07:08:03 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:54.630 No valid GPT data, bailing 00:03:54.630 07:08:03 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:54.630 07:08:03 -- scripts/common.sh@391 -- # pt= 00:03:54.630 07:08:03 -- scripts/common.sh@392 -- # return 1 00:03:54.630 07:08:03 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:54.630 1+0 records in 00:03:54.630 1+0 records out 00:03:54.630 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00322287 s, 325 MB/s 00:03:54.630 07:08:03 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:54.630 07:08:03 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:54.630 07:08:03 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:54.630 07:08:03 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:54.630 07:08:03 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:54.630 No valid GPT data, bailing 00:03:54.630 07:08:03 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:54.630 07:08:03 -- scripts/common.sh@391 -- # pt= 00:03:54.630 07:08:03 -- scripts/common.sh@392 -- # return 1 00:03:54.630 07:08:03 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:54.630 1+0 records in 00:03:54.630 1+0 records out 00:03:54.630 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00457636 s, 229 MB/s 00:03:54.630 07:08:03 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:54.630 07:08:03 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:54.630 07:08:03 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:54.630 07:08:03 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:54.630 07:08:03 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:54.630 No valid GPT data, bailing 00:03:54.630 07:08:03 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:54.630 07:08:03 -- scripts/common.sh@391 -- # pt= 00:03:54.630 07:08:03 -- scripts/common.sh@392 -- # return 1 00:03:54.630 07:08:03 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:54.630 1+0 records in 00:03:54.630 1+0 records out 00:03:54.630 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00417564 s, 251 MB/s 00:03:54.630 07:08:03 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:54.630 07:08:03 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:54.630 07:08:03 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:54.630 07:08:03 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:54.630 07:08:03 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:54.890 No valid GPT data, bailing 00:03:54.890 07:08:03 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:54.890 07:08:03 -- scripts/common.sh@391 -- # pt= 00:03:54.890 07:08:03 -- scripts/common.sh@392 -- # return 1 00:03:54.890 07:08:03 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 
00:03:54.890 1+0 records in 00:03:54.890 1+0 records out 00:03:54.890 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00465956 s, 225 MB/s 00:03:54.890 07:08:03 -- spdk/autotest.sh@118 -- # sync 00:03:54.890 07:08:03 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:54.890 07:08:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:54.890 07:08:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:56.787 07:08:05 -- spdk/autotest.sh@124 -- # uname -s 00:03:56.787 07:08:05 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:56.787 07:08:05 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:56.787 07:08:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.787 07:08:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.787 07:08:05 -- common/autotest_common.sh@10 -- # set +x 00:03:56.787 ************************************ 00:03:56.787 START TEST setup.sh 00:03:56.787 ************************************ 00:03:56.788 07:08:05 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:56.788 * Looking for test storage... 00:03:56.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:56.788 07:08:05 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:56.788 07:08:05 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:56.788 07:08:05 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:56.788 07:08:05 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.788 07:08:05 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.788 07:08:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:56.788 ************************************ 00:03:56.788 START TEST acl 00:03:56.788 ************************************ 00:03:56.788 07:08:05 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:56.788 * Looking for test storage... 
00:03:56.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:56.788 07:08:05 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:56.788 07:08:05 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:56.788 07:08:05 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:56.788 07:08:05 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:56.788 07:08:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:56.788 07:08:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:56.788 07:08:05 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:56.788 07:08:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:56.788 07:08:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:56.788 07:08:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:56.788 07:08:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:56.788 07:08:05 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:56.788 07:08:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:56.788 07:08:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:56.788 07:08:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:56.788 07:08:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:56.788 07:08:05 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:56.788 07:08:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:56.788 07:08:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:56.788 07:08:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:56.788 07:08:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:56.788 07:08:05 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:56.788 07:08:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:56.788 07:08:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:56.788 07:08:05 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:56.788 07:08:05 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:56.788 07:08:05 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:56.788 07:08:05 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:56.788 07:08:05 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:56.788 07:08:05 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:56.788 07:08:05 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:57.720 07:08:06 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:57.720 07:08:06 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:57.720 07:08:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.720 07:08:06 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:57.720 07:08:06 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.720 07:08:06 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:58.286 07:08:07 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:58.286 07:08:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:58.286 07:08:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.286 Hugepages 00:03:58.286 node hugesize free / total 00:03:58.286 07:08:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:58.286 07:08:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:58.286 07:08:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.286 00:03:58.286 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:58.286 07:08:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:58.286 07:08:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:58.286 07:08:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.286 07:08:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:58.286 07:08:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:58.286 07:08:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:58.286 07:08:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.543 07:08:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:58.544 07:08:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:58.544 07:08:07 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:58.544 07:08:07 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:58.544 07:08:07 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:58.544 07:08:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.544 07:08:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:58.544 07:08:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:58.544 07:08:07 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:58.544 07:08:07 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:58.544 07:08:07 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:58.544 07:08:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.544 07:08:07 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:58.544 07:08:07 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:58.544 07:08:07 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.544 07:08:07 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.544 07:08:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:58.544 ************************************ 00:03:58.544 START TEST denied 00:03:58.544 ************************************ 00:03:58.544 07:08:07 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:58.544 07:08:07 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:58.544 07:08:07 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:58.544 07:08:07 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.544 07:08:07 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:58.544 07:08:07 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:59.477 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:59.477 07:08:08 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:59.477 07:08:08 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:03:59.477 07:08:08 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:59.477 07:08:08 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:59.477 07:08:08 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:59.477 07:08:08 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:59.477 07:08:08 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:59.477 07:08:08 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:59.477 07:08:08 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:59.477 07:08:08 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:00.043 00:04:00.043 real 0m1.413s 00:04:00.043 user 0m0.561s 00:04:00.043 sys 0m0.810s 00:04:00.043 07:08:08 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.043 07:08:08 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:00.043 ************************************ 00:04:00.043 END TEST denied 00:04:00.043 ************************************ 00:04:00.043 07:08:08 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:00.043 07:08:08 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:00.043 07:08:08 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.043 07:08:08 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.043 07:08:08 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:00.043 ************************************ 00:04:00.043 START TEST allowed 00:04:00.043 ************************************ 00:04:00.043 07:08:08 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:00.043 07:08:08 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:00.043 07:08:08 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:00.043 07:08:08 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:00.043 07:08:08 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.043 07:08:08 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:00.979 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.979 07:08:09 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:00.979 07:08:09 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:00.979 07:08:09 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:00.979 07:08:09 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:00.980 07:08:09 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:00.980 07:08:09 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:00.980 07:08:09 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:00.980 07:08:09 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:00.980 07:08:09 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:00.980 07:08:09 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:01.547 00:04:01.547 real 0m1.536s 00:04:01.547 user 0m0.667s 00:04:01.547 sys 0m0.860s 00:04:01.547 07:08:10 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:01.547 07:08:10 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:01.547 ************************************ 00:04:01.547 END TEST allowed 00:04:01.547 ************************************ 00:04:01.547 07:08:10 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:01.547 00:04:01.547 real 0m4.743s 00:04:01.547 user 0m2.105s 00:04:01.547 sys 0m2.596s 00:04:01.547 07:08:10 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.547 07:08:10 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:01.547 ************************************ 00:04:01.547 END TEST acl 00:04:01.547 ************************************ 00:04:01.547 07:08:10 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:01.547 07:08:10 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:01.547 07:08:10 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.547 07:08:10 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.547 07:08:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:01.547 ************************************ 00:04:01.548 START TEST hugepages 00:04:01.548 ************************************ 00:04:01.548 07:08:10 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:01.806 * Looking for test storage... 00:04:01.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6050440 kB' 'MemAvailable: 7430800 kB' 'Buffers: 2436 kB' 'Cached: 1594616 kB' 'SwapCached: 0 kB' 'Active: 436028 kB' 'Inactive: 1265704 kB' 'Active(anon): 115168 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265704 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 106652 kB' 'Mapped: 48768 kB' 'Shmem: 10488 kB' 'KReclaimable: 61468 kB' 'Slab: 133676 kB' 'SReclaimable: 61468 kB' 'SUnreclaim: 72208 kB' 'KernelStack: 6344 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 336788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.806 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.807 07:08:10 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.807 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.808 07:08:10 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:01.808 07:08:10 
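
The long field-by-field scan above is setup/common.sh's get_meminfo walking /proc/meminfo with IFS=': ' until it reaches the requested key; here it matches Hugepagesize, echoes 2048 and returns, and hugepages.sh records that as default_hugepages=2048 next to the per-size and global nr_hugepages paths. A minimal sketch of the same lookup, using a hypothetical helper name (meminfo_value) rather than the script's own function, could look like this; the real helper, as later calls in this log show, also takes an optional node argument and switches to /sys/devices/system/node/node$N/meminfo when that file exists.

# Sketch only: a simplified stand-in for the get_meminfo pattern traced above.
# meminfo_value is a hypothetical name, not the script's.
meminfo_value() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue
    echo "$val"
    return 0
  done < /proc/meminfo
  return 1
}

meminfo_value Hugepagesize   # prints 2048 on this VM (kB)

The 2048 kB value drives the sizing that follows: default_setup asks for 2097152 kB, which divided by 2048 gives the nr_hugepages=1024 assigned to node 0 a few lines below.
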
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:01.808 07:08:10 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:01.808 07:08:10 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.808 07:08:10 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.808 07:08:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:01.808 ************************************ 00:04:01.808 START TEST default_setup 00:04:01.808 ************************************ 00:04:01.808 07:08:10 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:01.808 07:08:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:01.808 07:08:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:01.808 07:08:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:01.808 07:08:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:01.808 07:08:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:01.808 07:08:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:01.808 07:08:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:01.808 07:08:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:01.808 07:08:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:01.808 07:08:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:01.808 07:08:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:01.808 07:08:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:01.808 07:08:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:01.808 07:08:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:01.808 07:08:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:01.808 07:08:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:01.808 07:08:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:01.808 07:08:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:01.808 07:08:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:01.808 07:08:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:01.808 07:08:10 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.808 07:08:10 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:02.372 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.372 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:02.631 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8123452 kB' 'MemAvailable: 9503672 kB' 'Buffers: 2436 kB' 'Cached: 1594604 kB' 'SwapCached: 0 kB' 'Active: 453100 kB' 'Inactive: 1265712 kB' 'Active(anon): 132240 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265712 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123164 kB' 'Mapped: 48992 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133304 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72128 kB' 'KernelStack: 6400 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
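
Before the test body runs, the clear_hp loop shown a little earlier writes 0 into every hugepages-*/nr_hugepages file under each NUMA node and exports CLEAR_HUGE=yes, so scripts/setup.sh starts from an empty pool before rebinding the NVMe controllers to uio_pci_generic. A rough sketch of that reset step, assuming the standard sysfs layout (a single node0 on this VM) and not quoting the script verbatim:

# Sketch only: mirrors the clear_hp loop traced in the log above.
for node in /sys/devices/system/node/node[0-9]*; do
  for hp in "$node"/hugepages/hugepages-*/nr_hugepages; do
    echo 0 > "$hp"   # release any previously reserved pages of this size
  done
done
export CLEAR_HUGE=yes   # exported just before scripts/setup.sh runs, as shown above
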
00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.631 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.632 07:08:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8123452 kB' 'MemAvailable: 9503672 kB' 'Buffers: 2436 kB' 'Cached: 1594604 kB' 'SwapCached: 0 kB' 'Active: 452828 kB' 'Inactive: 1265712 kB' 'Active(anon): 131968 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265712 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122892 kB' 'Mapped: 48784 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133300 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72124 kB' 'KernelStack: 6336 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.632 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
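
The /proc/meminfo snapshot printed with each of these lookups already reflects the state the test expects: HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0 and Hugetlb: 2097152 kB, i.e. exactly 1024 pages of 2048 kB for the 2 GiB requested on node 0. A small standalone sanity check in the same spirit as the surplus/reserved reads happening here (not the script's own verification logic):

# Sketch only: cross-checks the snapshot values shown in this log.
read -r total free resv surp < <(
  awk '/^HugePages_(Total|Free|Rsvd|Surp):/ {printf "%s ", $2}' /proc/meminfo
)
size_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)   # 2048
(( total * size_kb == 2097152 )) && echo "pool matches the 2 GiB request (1024 x 2048 kB)"
(( resv == 0 && surp == 0 )) && echo "no reserved or surplus pages"
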
00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.633 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- [xtrace condensed: setup/common.sh@31-32 skip the remaining /proc/meminfo keys (HardwareCorrupted through HugePages_Rsvd) with 'continue' until the requested key matches]
00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- [xtrace condensed: setup/common.sh@17-31 re-read /proc/meminfo (node unset, mem_f=/proc/meminfo) into the mem array via mapfile with IFS=': ']
00:04:02.634 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8123452 kB' 'MemAvailable: 9503672 kB' 'Buffers: 2436 kB' 'Cached: 1594604 kB' 'SwapCached: 0 kB' 'Active: 452652 kB' 'Inactive: 1265712 kB' 'Active(anon): 131792 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265712 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122904 kB' 'Mapped: 48660 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133300 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72124 kB' 'KernelStack: 6304 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
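For readers following the trace: the get_meminfo lookups above and below boil down to slurping a meminfo file with mapfile, splitting each line with IFS=': ' and read -r var val _, and skipping every key except the one requested. A minimal stand-alone sketch of that pattern, for reference only (this is not the SPDK common.sh helper; the function name here is invented):

#!/usr/bin/env bash
# Illustrative only: a stand-alone rendition of the parsing pattern traced above,
# not the SPDK get_meminfo helper itself.
shopt -s extglob

meminfo_get() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    # With a node id, read the node-local copy exported through sysfs instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Node files prefix each line with "Node <id> "; strip that prefix, as the trace does.
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        # "HugePages_Surp:    0" -> var=HugePages_Surp, val=0
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

meminfo_get "${@:-HugePages_Surp}"

Called as 'meminfo_get HugePages_Rsvd' or 'meminfo_get HugePages_Surp 0', the sketch prints the bare value, which is the shape of output the surrounding hugepages.sh assignments (surp=..., resv=...) capture.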
00:04:02.635 07:08:11 setup.sh.hugepages.default_setup -- [xtrace condensed: setup/common.sh@31-32 read each /proc/meminfo key (MemTotal through HugePages_Free) and skip it with 'continue' until HugePages_Rsvd matches]
00:04:02.636 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:02.636 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:02.636 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:02.636 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:02.636 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:02.636 nr_hugepages=1024
00:04:02.636 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:02.636 resv_hugepages=0
00:04:02.636 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:02.636 surplus_hugepages=0
00:04:02.636 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:02.636 anon_hugepages=0
00:04:02.636 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:02.636 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:02.636 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:02.636 07:08:11 setup.sh.hugepages.default_setup -- [xtrace condensed: setup/common.sh@17-31 re-read /proc/meminfo into the mem array, as above]
00:04:02.636 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8123452 kB' 'MemAvailable: 9503672 kB' 'Buffers: 2436 kB' 'Cached: 1594604 kB' 'SwapCached: 0 kB' 'Active: 452560 kB' 'Inactive: 1265712 kB' 'Active(anon): 131700 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265712 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122780 kB' 'Mapped: 48660 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133300 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72124 kB' 'KernelStack: 6272 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
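The assertions traced just above, (( 1024 == nr_hugepages + surp + resv )) followed by a fresh HugePages_Total lookup, amount to checking that the pool the kernel reports matches the size the test requested. A rough stand-alone restatement of that check against the documented /proc interfaces (illustrative, not the SPDK hugepages.sh code; it assumes only the default hugepage size is in use):

#!/usr/bin/env bash
# Illustrative sanity check, not the SPDK test: compare the hugepage pool the
# kernel reports in /proc/meminfo against the count the test asked for.
set -euo pipefail

expected=${1:-1024}    # pages requested by the test setup

field() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

total=$(field HugePages_Total)
free=$(field HugePages_Free)
rsvd=$(field HugePages_Rsvd)
surp=$(field HugePages_Surp)
persistent=$(cat /proc/sys/vm/nr_hugepages)

echo "nr_hugepages=$persistent resv_hugepages=$rsvd surplus_hugepages=$surp"

# Kernel accounting (per the hugetlbpage documentation): the reported pool is the
# persistent pages plus any surplus pages.
(( total == persistent + surp ))
# What the traced test expects: the pool exactly matches the requested size.
(( total == expected ))
echo "HugePages_Total=$total HugePages_Free=$free (OK)"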
00:04:02.637 07:08:11 setup.sh.hugepages.default_setup -- [xtrace condensed: setup/common.sh@31-32 read each /proc/meminfo key (MemTotal through Unaccepted) and skip it with 'continue' until HugePages_Total matches]
00:04:02.638 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:02.638 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:02.638 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:02.638 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:02.638 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:02.638 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:02.638 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:02.638 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
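get_nodes, traced just above, enumerates /sys/devices/system/node/node<N> with an extglob pattern and seeds one counter per NUMA node before each node is verified individually. A stand-alone sketch of that walk (illustrative only; it reads the per-node sysfs nr_hugepages counter rather than the node meminfo file the test consults):

#!/usr/bin/env bash
# Illustrative sketch of the per-NUMA-node walk traced above (not the SPDK get_nodes).
shopt -s extglob nullglob

declare -A node_pages

# Enumerate /sys/devices/system/node/node0, node1, ... like the extglob loop above.
for node_dir in /sys/devices/system/node/node+([0-9]); do
    node=${node_dir##*node}        # ".../node0" -> "0"
    # Per-node pool for the default 2048 kB page size (standard sysfs location).
    node_pages[$node]=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
done

echo "no_nodes=${#node_pages[@]}"
for node in "${!node_pages[@]}"; do
    echo "node${node}=${node_pages[$node]} hugepages"
done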
00:04:02.638 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:02.638 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:02.638 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:02.638 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:02.638 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:02.638 07:08:11 setup.sh.hugepages.default_setup -- [xtrace condensed: setup/common.sh@17-31 select the node-local file (node=0, mem_f=/sys/devices/system/node/node0/meminfo) and read it into the mem array via mapfile with IFS=': ']
00:04:02.638 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8122952 kB' 'MemUsed: 4119020 kB' 'SwapCached: 0 kB' 'Active: 452820 kB' 'Inactive: 1265712 kB' 'Active(anon): 131960 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265712 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1597040 kB' 'Mapped: 48660 kB' 'AnonPages: 122780 kB' 'Shmem: 10464 kB' 'KernelStack: 6340 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61176 kB' 'Slab: 133300 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72124 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:02.639 07:08:11 setup.sh.hugepages.default_setup -- [xtrace condensed: setup/common.sh@31-32 read each node0 meminfo key (MemTotal through HugePages_Free) and skip it with 'continue' until HugePages_Surp matches]
00:04:02.639 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.639 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:02.639 07:08:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:02.639 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:02.639 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:02.639 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:02.639 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:02.639 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:02.639 node0=1024 expecting 1024
00:04:02.639 07:08:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:02.639 real 0m0.986s
00:04:02.639 user 0m0.460s
00:04:02.639 sys 0m0.465s
00:04:02.639 07:08:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:02.639 07:08:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:02.639 ************************************
00:04:02.639 END TEST default_setup
00:04:02.639 ************************************
00:04:02.896 07:08:11 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:02.896 07:08:11 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:02.896 07:08:11 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:02.896 07:08:11 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:02.896 07:08:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:02.896 ************************************
00:04:02.896 START TEST per_node_1G_alloc
00:04:02.896 ************************************
00:04:02.896 07:08:11 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
07:08:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
07:08:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
07:08:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
07:08:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
07:08:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
07:08:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
07:08:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
[... remaining get_test_nr_hugepages / get_test_nr_hugepages_per_node declarations condensed: (( 2 > 1 )), shift, local node_ids, user_nodes=('0'), local user_nodes, local _nr_hugepages=512, local _no_nodes=1, nodes_test=() ...]
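Note: the call above requests 1048576 kB of hugepages pinned to node 0 and the trace arrives at nr_hugepages=512. A minimal bash sketch of that arithmetic, assuming the 2048 kB Hugepagesize reported in the /proc/meminfo dumps further down; the variable names are illustrative, not a copy of get_test_nr_hugepages:

    # size request (kB) divided by the default hugepage size (kB) -> page count
    size_kb=1048576            # per-test hugepage pool requested for node 0
    default_hugepage_kb=2048   # Hugepagesize seen in /proc/meminfo on this VM
    if (( size_kb >= default_hugepage_kb )); then
        nr_hugepages=$(( size_kb / default_hugepage_kb ))
    fi
    echo "nr_hugepages=${nr_hugepages} on node 0"   # prints 512, matching the trace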
07:08:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
[... per-node assignment condensed: (( 1 > 0 )); for _no_nodes in "${user_nodes[@]}"; nodes_test[_no_nodes]=512; return 0 ...]
07:08:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
07:08:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0
07:08:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
07:08:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
07:08:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:03.178 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:03.178 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:03.178 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:03.178 07:08:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512
07:08:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
[... verify_nr_hugepages locals condensed: node, sorted_t, sorted_s, surp, resv, anon ...]
07:08:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
07:08:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
07:08:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
[... get_meminfo boilerplate condensed: local node=, local var val, local mem_f mem, mem_f=/proc/meminfo, [[ -e /sys/devices/system/node/node/meminfo ]] is false so the global /proc/meminfo is used, mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }"), IFS=': ' ...]
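Note: the test shells out to scripts/setup.sh with NRHUGE=512 and HUGENODE=0 to reserve the pool before verify_nr_hugepages re-reads /proc/meminfo. As a hedged illustration only, the snippet below shows the kernel's standard per-node sysfs knob that such a reservation ultimately has to touch; it is not a copy of what scripts/setup.sh itself runs, and it needs root plus a node0 with 2048 kB hugepages exposed:

    # Illustrative only; scripts/setup.sh wraps this with its own checks.
    NRHUGE=512
    HUGENODE=0
    HUGEPGSZ_KB=2048   # matches Hugepagesize in the meminfo dumps below
    knob="/sys/devices/system/node/node${HUGENODE}/hugepages/hugepages-${HUGEPGSZ_KB}kB/nr_hugepages"
    echo "$NRHUGE" > "$knob"   # reserve 512 x 2 MiB pages on node 0 (root required)
    cat "$knob"                # expect 512 once the kernel has satisfied the request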
07:08:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.178 07:08:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9173380 kB' 'MemAvailable: 10553604 kB' 'Buffers: 2436 kB' 'Cached: 1594604 kB' 'SwapCached: 0 kB' 'Active: 453076 kB' 'Inactive: 1265716 kB' 'Active(anon): 132216 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123324 kB' 'Mapped: 48864 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133284 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72108 kB' 'KernelStack: 6292 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[... get_meminfo AnonHugePages then walks this list with the usual [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue checks; the trace repeats for every key from MemTotal through Percpu, none of them matching ...]
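Note: the walk above is get_meminfo reading one 'key: value' pair per line with IFS=': ' until the requested key matches, then echoing its value. Below is a simplified stand-alone approximation of that lookup, assuming the global /proc/meminfo path taken in this trace; the real helper in setup/common.sh also handles per-node /sys/devices/system/node/nodeN/meminfo, which is omitted here:

    # Simplified sketch of the lookup the trace performs.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do      # splits "Key:   value kB" into key/value
            [[ $var == "$get" ]] || continue      # skip every non-matching key
            echo "$val"                           # print the numeric value
            return 0
        done < /proc/meminfo
        return 1                                  # key not present
    }
    get_meminfo_value AnonHugePages   # prints 0 on this VM, matching anon=0 below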
07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... tail of the AnonHugePages walk: HardwareCorrupted skipped, then the requested key matches ...]
07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
[... same get_meminfo boilerplate as above: local node=, local var val, local mem_f mem, mem_f=/proc/meminfo, mapfile -t mem, IFS=': ', read -r var val _ ...]
07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9173380 kB' 'MemAvailable: 10553604 kB' 'Buffers: 2436 kB' 'Cached: 1594604 kB' 'SwapCached: 0 kB' 'Active: 452692 kB' 'Inactive: 1265716 kB' 'Active(anon): 131832 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122904 kB' 'Mapped: 48660 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133288 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72112 kB' 'KernelStack: 6320 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[... the HugePages_Surp walk then checks and skips every key from MemTotal through HugePages_Total with continue; the matching entry follows below ...]
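Note: every /proc/meminfo dump in this run reports the same hugepage counters, which is what verify_nr_hugepages is about to assert. A quick sanity check of those numbers, with the values copied from the dumps above:

    # HugePages_Total * Hugepagesize should equal the Hugetlb figure, and
    # Free == Total with Rsvd/Surp at 0 means no page is in use or borrowed yet.
    total=512 free=512 rsvd=0 surp=0 hugepagesize_kb=2048 hugetlb_kb=1048576
    (( total * hugepagesize_kb == hugetlb_kb )) && echo "512 x 2048 kB = ${hugetlb_kb} kB"
    (( free == total && rsvd == 0 && surp == 0 )) && echo "pool idle: all 512 pages free"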
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.181 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.181 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.181 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.181 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.181 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.181 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.181 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.181 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.181 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.181 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.181 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:03.181 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.181 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.181 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:03.181 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.181 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.181 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.181 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.181 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.181 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.181 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.181 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.181 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9173380 kB' 'MemAvailable: 10553604 kB' 'Buffers: 2436 kB' 'Cached: 1594604 kB' 'SwapCached: 0 kB' 'Active: 452700 kB' 'Inactive: 1265716 kB' 'Active(anon): 131840 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123004 kB' 'Mapped: 48660 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133284 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72108 kB' 'KernelStack: 6336 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
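[Editor's note] The long run of "continue" entries above and below is setup/common.sh's get_meminfo scanning /proc/meminfo field by field until it hits the requested key (here HugePages_Rsvd). The following is a minimal, simplified reconstruction of that lookup based only on what this trace shows (mem_f selection, the "Node N " prefix strip, IFS=': ', read -r var val _, echo/return); it is not the verbatim SPDK source.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used below

    # get_meminfo <field> [node]: print the value of <field> from /proc/meminfo,
    # or from the node-scoped meminfo file when a node index is given.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node <n> " prefix; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # Every non-matching field shows up in the trace as "continue".
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

On the VM captured in this log, get_meminfo HugePages_Total would print 512 and get_meminfo HugePages_Rsvd would print 0, matching the "echo 512" / "echo 0" returns recorded in the trace.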
00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.182 07:08:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.182 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.183 
07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.183 07:08:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:03.183 nr_hugepages=512 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:03.183 resv_hugepages=0 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.183 surplus_hugepages=0 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.183 anon_hugepages=0 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.183 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9173380 kB' 'MemAvailable: 10553604 kB' 'Buffers: 2436 kB' 'Cached: 1594604 kB' 'SwapCached: 0 kB' 'Active: 452704 kB' 'Inactive: 1265716 kB' 'Active(anon): 131844 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123004 kB' 'Mapped: 48660 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 
kB' 'Slab: 133276 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72100 kB' 'KernelStack: 6336 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
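[Editor's note] Once the reserved and surplus counts come back as 0, hugepages.sh echoes nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0 and asserts (( 512 == nr_hugepages + surp + resv )), as seen just above. A hedged sketch of that bookkeeping, assuming the get_meminfo sketch earlier in this section; verify_hugepage_accounting is an illustrative name, not the actual function in hugepages.sh.

    # Check that the configured hugepage count is fully accounted for.
    verify_hugepage_accounting() {
        local expected=$1                     # 512 in this run
        local nr surp resv
        nr=$(get_meminfo HugePages_Total)     # 512 in the trace
        surp=$(get_meminfo HugePages_Surp)    # 0
        resv=$(get_meminfo HugePages_Rsvd)    # 0
        echo "nr_hugepages=$nr"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        # The test only proceeds if expected == total + surplus + reserved.
        (( expected == nr + surp + resv ))
    }

Calling verify_hugepage_accounting 512 on this machine would reproduce the nr_hugepages=512 / resv_hugepages=0 / surplus_hugepages=0 lines interleaved in the log above.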
00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
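[Editor's note] The rest of this test (visible further down, where mem_f switches to /sys/devices/system/node/node0/meminfo and the run ends with "node0=512 expecting 512") repeats the same lookup per NUMA node. A rough sketch of that per-node pass, again reusing the get_meminfo sketch above; check_per_node_surplus is an illustrative name, and the real hugepages.sh tracks a nodes_test array rather than re-reading HugePages_Total as done here.

    shopt -s extglob   # for the node+([0-9]) glob, as used by hugepages.sh

    # For each NUMA node, read its hugepage counters from the node-scoped
    # meminfo file and report them against the expected per-node count.
    check_per_node_surplus() {
        local expected=$1 node id total surp
        for node in /sys/devices/system/node/node+([0-9]); do
            id=${node##*node}                          # e.g. "0"
            total=$(get_meminfo HugePages_Total "$id") # 512 on node0 in this log
            surp=$(get_meminfo HugePages_Surp "$id")   # 0
            echo "node${id}=${total} expecting ${expected}"
            (( total == expected && surp == 0 )) || return 1
        done
    }

On this single-node VM, check_per_node_surplus 512 would print "node0=512 expecting 512", matching the line emitted near the end of this test before the real/user/sys timing summary.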
00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.184 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.185 
07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.185 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9175732 kB' 'MemUsed: 3066240 kB' 'SwapCached: 0 kB' 'Active: 452504 kB' 'Inactive: 1265716 kB' 'Active(anon): 131644 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1597040 kB' 'Mapped: 48660 kB' 'AnonPages: 122832 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61176 kB' 'Slab: 133272 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72096 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.443 07:08:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.443 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.443 07:08:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.444 07:08:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.444 node0=512 expecting 512 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:03.444 00:04:03.444 real 0m0.527s 00:04:03.444 user 0m0.254s 00:04:03.444 sys 0m0.307s 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.444 07:08:12 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:03.444 ************************************ 00:04:03.444 END TEST per_node_1G_alloc 00:04:03.444 ************************************ 00:04:03.444 07:08:12 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:03.444 07:08:12 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:03.444 07:08:12 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.444 07:08:12 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.444 07:08:12 
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:03.444 ************************************ 00:04:03.444 START TEST even_2G_alloc 00:04:03.444 ************************************ 00:04:03.444 07:08:12 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:03.444 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:03.444 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:03.444 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:03.444 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:03.444 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:03.444 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:03.444 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:03.444 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.444 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:03.444 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:03.444 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.444 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.444 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:03.444 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:03.444 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.444 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:03.444 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:03.444 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:03.444 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.444 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:03.444 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:03.444 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:03.444 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.444 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.705 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.705 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:03.705 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:03.705 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:03.705 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:03.705 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.705 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.705 07:08:12 setup.sh.hugepages.even_2G_alloc 
-- setup/hugepages.sh@92 -- # local surp 00:04:03.705 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:03.705 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:03.705 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.705 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.705 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.705 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:03.705 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.705 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.705 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.705 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.705 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.705 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.705 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.705 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8123516 kB' 'MemAvailable: 9503740 kB' 'Buffers: 2436 kB' 'Cached: 1594604 kB' 'SwapCached: 0 kB' 'Active: 453088 kB' 'Inactive: 1265716 kB' 'Active(anon): 132228 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123336 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133280 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72104 kB' 'KernelStack: 6292 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.706 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8123268 kB' 'MemAvailable: 9503492 kB' 'Buffers: 2436 kB' 'Cached: 1594604 kB' 'SwapCached: 0 kB' 'Active: 452744 kB' 'Inactive: 
1265716 kB' 'Active(anon): 131884 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122980 kB' 'Mapped: 48660 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133280 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72104 kB' 'KernelStack: 6336 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.707 07:08:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.707 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.708 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8123268 kB' 'MemAvailable: 9503492 kB' 'Buffers: 2436 kB' 'Cached: 1594604 kB' 'SwapCached: 0 kB' 'Active: 452708 kB' 'Inactive: 1265716 kB' 'Active(anon): 131848 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122996 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133268 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72092 kB' 'KernelStack: 6320 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.709 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.710 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.711 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.711 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.711 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.711 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.711 07:08:12 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:04:03.711 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:03.711 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:03.711 nr_hugepages=1024 00:04:03.711 resv_hugepages=0 00:04:03.711 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.711 surplus_hugepages=0 00:04:03.711 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.711 anon_hugepages=0 00:04:03.711 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.711 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.711 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:03.711 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.711 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.711 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:03.711 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.711 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.711 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.711 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.711 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.711 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.711 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.970 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.970 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8123268 kB' 'MemAvailable: 9503492 kB' 'Buffers: 2436 kB' 'Cached: 1594604 kB' 'SwapCached: 0 kB' 'Active: 452620 kB' 'Inactive: 1265716 kB' 'Active(anon): 131760 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122920 kB' 'Mapped: 48660 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133264 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72088 kB' 'KernelStack: 6320 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:03.971 07:08:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.971 07:08:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.971 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.972 07:08:12 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.972 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8123268 kB' 'MemUsed: 4118704 kB' 'SwapCached: 0 kB' 'Active: 452696 kB' 'Inactive: 1265716 kB' 'Active(anon): 131836 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1597040 kB' 'Mapped: 48660 kB' 'AnonPages: 123032 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61176 kB' 'Slab: 133264 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.973 07:08:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.973 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.973 07:08:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.974 node0=1024 expecting 1024 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:03.974 00:04:03.974 real 0m0.514s 00:04:03.974 user 0m0.285s 00:04:03.974 sys 0m0.263s 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.974 07:08:12 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:03.974 ************************************ 00:04:03.974 END TEST even_2G_alloc 00:04:03.974 ************************************ 00:04:03.974 07:08:12 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:03.974 07:08:12 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:03.974 07:08:12 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.974 07:08:12 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.974 07:08:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:03.974 ************************************ 00:04:03.974 START TEST odd_alloc 00:04:03.974 ************************************ 00:04:03.974 07:08:12 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:03.974 07:08:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:03.974 07:08:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:03.974 07:08:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:03.974 07:08:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:03.974 07:08:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:03.974 07:08:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:03.974 07:08:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:03.974 07:08:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.974 07:08:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:03.974 07:08:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:03.974 07:08:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.974 07:08:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.974 07:08:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:03.974 07:08:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:03.974 07:08:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.974 07:08:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:03.974 07:08:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:03.974 07:08:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:03.974 07:08:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.974 07:08:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:03.974 07:08:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
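The even_2G_alloc pass above ends with node0=1024 expecting 1024: each HugePages_* figure was pulled out of /proc/meminfo (and out of /sys/devices/system/node/node0/meminfo for the per-node check) by the get_meminfo helper in setup/common.sh, whose field-by-field scan is what produces the long [[ ... == \H\u\g\e\P\a\g\e\s\_... ]] / continue trace. A minimal sketch of that lookup, reconstructed from the xtrace above rather than copied from the SPDK tree, looks like this:

#!/usr/bin/env bash
# Sketch of the get_meminfo lookup seen in the trace (setup/common.sh);
# reconstructed from the xtrace output, not the verbatim SPDK helper.
shopt -s extglob

get_meminfo() {
    # usage: get_meminfo <field> [numa-node]
    local get=$1 node=$2 mem_f=/proc/meminfo mem line var val _
    # Per-node queries read the node-specific meminfo instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"   # e.g. 1024 for HugePages_Total, 0 for HugePages_Surp
            return 0
        fi
    done
    return 1
}

# Calls matching the log:  get_meminfo HugePages_Total    -> 1024
#                          get_meminfo HugePages_Surp 0   -> 0 (NUMA node 0)

The odd_alloc test that starts next asks get_test_nr_hugepages for 2098176 kB; with the 2048 kB hugepage size in use that request is covered by 1025 pages (nr_hugepages=1025 in the trace, and 1025 x 2048 kB = 2099200 kB matches the Hugetlb figure in the later dump), an intentionally odd count, hence the test name. HUGEMEM=2049 and HUGE_EVEN_ALLOC=yes are set before setup.sh is re-run below.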
00:04:03.974 07:08:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:03.974 07:08:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:03.974 07:08:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:04.233 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:04.233 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:04.234 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:04.234 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
[local declarations traced at hugepages.sh@89-@94: node, sorted_t, sorted_s, surp, resv, anon]
00:04:04.234 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:04.234 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:04.234 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:04.234 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:04.234 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:04.234 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.234 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.234 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.234 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.234 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.234 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.234 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
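The get_meminfo call being traced here follows one pattern throughout this log: pick /proc/meminfo (or a per-node meminfo file under sysfs), strip any leading "Node <id> " prefix, then split each line on ': ' and print the value of the requested key. Below is a self-contained sketch of that parser; it mirrors what the trace shows but is not the exact setup/common.sh implementation, and error handling is omitted.

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in setup/common.sh: fetch one field
# from /proc/meminfo, or from a NUMA node's meminfo when a node id is given.
shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node N "

get_meminfo() {
    local get=$1 node=${2:-}
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # Per-node counters live in sysfs and are prefixed with "Node <id> ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # normalize per-node lines to "Key: value"

    # Split each line on ': ' and echo the value of the requested key, exactly
    # the loop whose xtrace fills the surrounding log.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo AnonHugePages        # system-wide value, as in the trace above
get_meminfo HugePages_Free 0     # node 0 value, if node0/meminfo exists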
00:04:04.234 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8121212 kB' 'MemAvailable: 9501436 kB' 'Buffers: 2436 kB' 'Cached: 1594604 kB' 'SwapCached: 0 kB' 'Active: 453320 kB' 'Inactive: 1265716 kB' 'Active(anon): 132460 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123636 kB' 'Mapped: 48988 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133276 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72100 kB' 'KernelStack: 6328 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 354604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[xtrace of the per-key scan condensed: each /proc/meminfo field above is read with IFS=': ' and skipped via "continue" until AnonHugePages matches]
00:04:04.235 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:04.235 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.235 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:04.235 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:04.235 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[get_meminfo repeats the same capture traced above, this time with local get=HugePages_Surp: mem_f=/proc/meminfo, mapfile -t mem, "Node N " prefix strip, IFS=': ', read -r var val _]
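The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] entry a few lines up is a transparent-hugepage guard: verify_nr_hugepages only treats AnonHugePages as meaningful when THP is not globally disabled. A small sketch of that guard, using awk in place of the script's get_meminfo helper and the standard sysfs path as an assumption:

#!/usr/bin/env bash
# Sketch of the THP check behind the "[madvise]"/"[never]" pattern test above.
thp_enabled=/sys/kernel/mm/transparent_hugepage/enabled
anon=0

if [[ -r $thp_enabled && $(<"$thp_enabled") != *"[never]"* ]]; then
    # The file reads like "always [madvise] never"; the bracketed word is the
    # active policy, so anything containing "[never]" means THP is off and the
    # anonymous hugepage counter can be left at 0.
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi

echo "anon_hugepages=${anon}"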
00:04:04.498 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8121588 kB' 'MemAvailable: 9501812 kB' 'Buffers: 2436 kB' 'Cached: 1594604 kB' 'SwapCached: 0 kB' 'Active: 452424 kB' 'Inactive: 1265716 kB' 'Active(anon): 131564 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122732 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133284 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72108 kB' 'KernelStack: 6340 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 353848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[xtrace of the per-key scan condensed: each /proc/meminfo field above is read with IFS=': ' and skipped via "continue" until HugePages_Surp matches]
00:04:04.499 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.499 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.499 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:04.499 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:04.499 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[get_meminfo repeats the same capture once more with local get=HugePages_Rsvd: mem_f=/proc/meminfo, mapfile -t mem, "Node N " prefix strip, IFS=': ', read -r var val _]
00:04:04.499 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8121588 kB' 'MemAvailable: 9501812 kB' 'Buffers: 2436 kB' 'Cached: 1594604 kB' 'SwapCached: 0 kB' 'Active: 452480 kB' 'Inactive: 1265716 kB' 'Active(anon): 131620 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122772 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133284 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72108 kB' 'KernelStack: 6356 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 353848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
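Once anon, surp and resv have been collected (the HugePages_Rsvd scan follows below), verify_nr_hugepages compares the kernel's hugepage counters against the 1025 pages odd_alloc requested, as the (( 1025 == nr_hugepages + surp + resv )) and (( 1025 == nr_hugepages )) entries further down show. The following is a compact, stand-alone sketch of that kind of accounting; the counters are read with awk rather than the script's get_meminfo helper, and the exact operands used in hugepages.sh are not visible in this excerpt.

#!/usr/bin/env bash
# Sketch of the hugepage accounting checked by verify_nr_hugepages.
nr_hugepages=1025   # what odd_alloc asked for (HUGEMEM=2049 -> 1025 pages)

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

# Roughly mirrors the checks in the trace: the pool the kernel reports should
# cover the requested pages plus any surplus/reserved ones, and with surp=0
# and resv=0 it should be exactly the 1025 pages the test allocated.
if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
    echo "nr_hugepages=${nr_hugepages} verified (surp=${surp}, resv=${resv})"
else
    echo "unexpected hugepage accounting: total=${total} surp=${surp} resv=${resv}" >&2
    exit 1
fi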
[xtrace of the per-key scan condensed: each /proc/meminfo field above is read with IFS=': ' and skipped via "continue" until HugePages_Rsvd matches]
00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:04.501 nr_hugepages=1025
resv_hugepages=0
00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8121588 kB' 'MemAvailable: 9501812 kB' 'Buffers: 2436 kB' 'Cached: 1594604 kB' 'SwapCached: 0 kB' 'Active: 452440 kB' 'Inactive: 1265716 kB' 'Active(anon): 131580 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122772 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133284 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72108 kB' 'KernelStack: 6356 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 353848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.501 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.502 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.503 07:08:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.503 07:08:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.503 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8121588 kB' 'MemUsed: 4120384 kB' 'SwapCached: 0 kB' 'Active: 452736 kB' 'Inactive: 1265716 kB' 'Active(anon): 131876 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1597040 kB' 'Mapped: 48800 kB' 'AnonPages: 123016 kB' 'Shmem: 10464 kB' 'KernelStack: 6324 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61176 kB' 'Slab: 133280 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.504 07:08:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.504 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
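The trace above and below is SPDK's setup/common.sh get_meminfo walking /sys/devices/system/node/node0/meminfo one "key: value" pair at a time and discarding every key that is not the one requested. A minimal stand-alone sketch of that scan follows; the helper name get_field is illustrative, not the real SPDK function, and it only assumes the file layout visible in this log:

get_field() {   # illustrative helper, not the SPDK function; usage: get_field HugePages_Surp [node]
    local key=$1 node=$2 mem_f=/proc/meminfo line var val _
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#Node $node }                      # per-node files prefix each line with "Node <n> "
        IFS=': ' read -r var val _ <<< "$line"        # same IFS=': ' split as the trace
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}
get_field HugePages_Surp 0    # -> 0, the value echoed at the end of this scan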
00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:04.505 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
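The zero echoed here is node 0's HugePages_Surp; together with the HugePages_Rsvd=0 and HugePages_Total=1025 read earlier it feeds the bookkeeping verify_nr_hugepages does for the odd_alloc case. Roughly, with the literal values from this run rather than the script's own variables:

nr_hugepages=1025   # odd page count requested by this test
resv=0              # HugePages_Rsvd from /proc/meminfo above
surp=0              # HugePages_Surp for node 0, just read
total=1025          # HugePages_Total from /proc/meminfo above
(( total == nr_hugepages + surp + resv )) && echo "node0=$nr_hugepages expecting $nr_hugepages"
# prints: node0=1025 expecting 1025  (the line that appears just below in the log)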
00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.506 node0=1025 expecting 1025 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:04.506 00:04:04.506 real 0m0.547s 00:04:04.506 user 0m0.264s 00:04:04.506 sys 0m0.295s 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.506 07:08:13 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:04.506 ************************************ 00:04:04.506 END TEST odd_alloc 00:04:04.506 ************************************ 00:04:04.506 07:08:13 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:04.506 07:08:13 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:04.506 07:08:13 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.506 07:08:13 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.506 07:08:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:04.506 ************************************ 00:04:04.506 START TEST custom_alloc 00:04:04.506 ************************************ 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:04.506 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:04.507 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:04.507 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:04.507 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:04.507 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:04.507 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:04.507 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.507 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:04.764 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.764 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:04.764 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:04.764 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:04.764 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:04:04.764 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:04.764 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:04.764 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:04.764 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:04.764 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:04.764 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:04.764 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:04.764 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:04.764 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:04.764 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:04.764 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:04.764 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.764 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.764 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.764 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.764 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.024 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.024 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.024 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.024 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9181060 kB' 'MemAvailable: 10561288 kB' 'Buffers: 2436 kB' 'Cached: 1594608 kB' 'SwapCached: 0 kB' 'Active: 452968 kB' 'Inactive: 1265720 kB' 'Active(anon): 132108 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123256 kB' 'Mapped: 48776 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133308 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72132 kB' 'KernelStack: 6352 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:05.024 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.024 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.024 07:08:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.024 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
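custom_alloc asked get_test_nr_hugepages for a 1 GiB pool, which on this VM's 2 MiB hugepages works out to the 512 pages visible in the meminfo dump above; all of them land on node 0 because it is the only node. One way to get there from the numbers in this log (a sketch, not the script's exact formula):

size_kb=1048576          # pool size passed to get_test_nr_hugepages (1 GiB)
hugepagesize_kb=2048     # Hugepagesize reported in the dump above
echo "nr_hugepages=$(( size_kb / hugepagesize_kb ))"    # nr_hugepages=512
echo "HUGENODE=nodes_hp[0]=512"                         # single node, so node 0 gets the whole pool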
00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.025 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241972 kB' 'MemFree: 9181060 kB' 'MemAvailable: 10561288 kB' 'Buffers: 2436 kB' 'Cached: 1594608 kB' 'SwapCached: 0 kB' 'Active: 452468 kB' 'Inactive: 1265720 kB' 'Active(anon): 131608 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122776 kB' 'Mapped: 48660 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133308 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72132 kB' 'KernelStack: 6336 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 
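Every snapshot printed by get_meminfo in this run carries the same hugepage figures: HugePages_Total: 512, HugePages_Free: 512, Hugepagesize: 2048 kB and Hugetlb: 1048576 kB. A quick, purely illustrative consistency check of those numbers:

	total=512 page_kb=2048
	echo $(( total * page_kb ))          # 1048576 kB, matching the Hugetlb line
	echo $(( total * page_kb / 1024 ))   # 1024 MiB, i.e. a 1 GiB hugepage pool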
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.026 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9180808 kB' 'MemAvailable: 10561036 kB' 'Buffers: 2436 kB' 'Cached: 1594608 kB' 'SwapCached: 0 kB' 'Active: 452480 kB' 'Inactive: 1265720 kB' 'Active(anon): 131620 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123028 kB' 'Mapped: 48660 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133304 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72128 kB' 'KernelStack: 6336 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.027 07:08:13 
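The trace repeats the full /proc/meminfo scan once per counter: AnonHugePages above, HugePages_Surp next, and HugePages_Rsvd here. A hypothetical single-pass alternative — not part of setup/common.sh, shown only to contrast with the per-key helper — could collect all four hugepage counters with one awk invocation:

	read -r anon surp resv total < <(awk -F'[: ]+' '
		BEGIN { a = s = r = t = 0 }
		$1 == "AnonHugePages"   { a = $2 }
		$1 == "HugePages_Surp"  { s = $2 }
		$1 == "HugePages_Rsvd"  { r = $2 }
		$1 == "HugePages_Total" { t = $2 }
		END { print a, s, r, t }' /proc/meminfo)
	echo "anon=$anon surp=$surp resv=$resv total=$total"

The per-key helper keeps the test's xtrace easy to follow, one lookup per logged step, at the cost of rereading the file; the one-pass form trades that transparency for fewer scans.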
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.027 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.028 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:05.029 nr_hugepages=512 00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:05.029 resv_hugepages=0 
00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:05.029 surplus_hugepages=0 anon_hugepages=0
00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... xtrace of get_meminfo's setup condensed: get=HugePages_Total, no node given, mem_f=/proc/meminfo, mapfile -t mem, "Node <n>" prefix strip, IFS=': ' ...]
00:04:05.029 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9180808 kB' 'MemAvailable: 10561036 kB' 'Buffers: 2436 kB' 'Cached: 1594608 kB' 'SwapCached: 0 kB' 'Active: 452728 kB' 'Inactive: 1265720 kB' 'Active(anon): 131868 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122960 kB' 'Mapped: 48660 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133304 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72128 kB' 'KernelStack: 6320 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[... xtrace condensed: every key from MemTotal through Unaccepted fails the match against HugePages_Total and is skipped with "continue" ...]
00:04:05.031 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:05.031 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:04:05.031 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:05.031 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:05.031 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:05.031 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:05.031 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:05.031 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:05.031 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:05.031 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:05.031 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:05.031 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:05.031 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
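get_nodes, traced just above, discovers the NUMA nodes under /sys/devices/system/node and records the page count expected on each; this VM has a single node, so all 512 pages are attributed to node0 before the per-node HugePages_Surp lookup that follows. A rough sketch of that discovery step, reusing the hypothetical get_meminfo_value helper from the earlier sketch (the array name nodes_expected is illustrative):

    # Record the expected page count for every NUMA node present.
    declare -A nodes_expected
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        [[ -d $node_dir ]] || continue
        node=${node_dir##*node}          # node0 -> 0
        nodes_expected[$node]=512
    done

    # Read back what the kernel reports per node, as the trace does next.
    for node in "${!nodes_expected[@]}"; do
        got=$(get_meminfo_value HugePages_Total "$node")
        echo "node${node}=${got} expecting ${nodes_expected[$node]}"
    done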
[... xtrace of get_meminfo's setup condensed: get=HugePages_Surp, node=0, mem_f=/sys/devices/system/node/node0/meminfo, mapfile -t mem, "Node 0" prefix strip, IFS=': ' ...]
00:04:05.031 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9180808 kB' 'MemUsed: 3061164 kB' 'SwapCached: 0 kB' 'Active: 452668 kB' 'Inactive: 1265720 kB' 'Active(anon): 131808 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1597044 kB' 'Mapped: 48660 kB' 'AnonPages: 122936 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61176 kB' 'Slab: 133300 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72124 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace condensed: every node0 key from MemTotal through HugePages_Free fails the match against HugePages_Surp and is skipped with "continue" ...]
00:04:05.032 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.032 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:05.032 07:08:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:05.032 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:05.032 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:05.032 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:05.032 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:05.032 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:04:05.032 07:08:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:05.032
00:04:05.032 real 0m0.553s
00:04:05.032 user 0m0.275s
00:04:05.032 sys 0m0.276s
00:04:05.032 07:08:13 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:05.032 ************************************
00:04:05.032 END TEST custom_alloc
00:04:05.032 ************************************
00:04:05.032 07:08:13 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
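The custom_alloc verification that ends here boils down to the arithmetic visible in the trace: the 512 pages the test configured must equal what the kernel reports as allocated plus surplus plus reserved (all read from /proc/meminfo, with surplus and reserved both 0 in this run), and the per-node sums must match as well ("node0=512 expecting 512"). A stand-alone version of the global check, again using the hypothetical get_meminfo_value helper and assuming, as in this run, that the allocated count is taken from HugePages_Total:

    # Global accounting check mirroring hugepages.sh@107/@110 in the trace.
    requested=512
    total=$(get_meminfo_value HugePages_Total)
    surp=$(get_meminfo_value HugePages_Surp)
    resv=$(get_meminfo_value HugePages_Rsvd)
    if (( requested == total + surp + resv )); then
        echo "hugepage accounting OK: total=${total} surplus=${surp} reserved=${resv}"
    else
        echo "hugepage accounting mismatch: total=${total} surplus=${surp} reserved=${resv}" >&2
        exit 1
    fi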
00:04:05.032 07:08:13 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:05.032 07:08:13 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:05.032 07:08:13 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:05.032 07:08:13 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:05.032 07:08:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:05.032 ************************************
00:04:05.032 START TEST no_shrink_alloc
00:04:05.032 ************************************
[... xtrace of the no_shrink_alloc prologue condensed: get_test_nr_hugepages 2097152 0 sets size=2097152, node_ids=('0'), nr_hugepages=1024, user_nodes=('0'), _nr_hugepages=1024, _no_nodes=1 and nodes_test[0]=1024, then "setup output" runs /home/vagrant/spdk_repo/spdk/scripts/setup.sh ...]
00:04:05.604 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:05.604 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:05.604 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:05.604 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
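The no_shrink_alloc prologue condensed above turns the requested size into a page count before verify_nr_hugepages runs: with the default 2048 kB hugepage size, the 2097152 kB request becomes nr_hugepages=1024, all of it assigned to node 0. A small sketch of that conversion (the helper name pages_for_size and the assumption that the size argument is in kB are inferred from the numbers in this run, not taken from setup/hugepages.sh):

    # Convert a requested hugepage-memory size (kB) into a page count.
    pages_for_size() {
        local size_kb=$1 hp_kb
        hp_kb=$(get_meminfo_value Hugepagesize)    # 2048 kB on this VM
        echo $(( size_kb / hp_kb ))
    }

    pages_for_size 2097152    # 2097152 kB / 2048 kB -> 1024 pages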
00:04:05.604 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:05.604 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:05.604 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:05.604 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:05.604 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:05.604 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:05.604 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:05.604 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[... xtrace of get_meminfo's setup condensed: get=AnonHugePages, no node given, mem_f=/proc/meminfo, mapfile -t mem, "Node <n>" prefix strip, IFS=': ' ...]
00:04:05.604 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8132768 kB' 'MemAvailable: 9512996 kB' 'Buffers: 2436 kB' 'Cached: 1594608 kB' 'SwapCached: 0 kB' 'Active: 452944 kB' 'Inactive: 1265720 kB' 'Active(anon): 132084 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123484 kB' 'Mapped: 48792 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133332 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72156 kB' 'KernelStack: 6324 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[... xtrace condensed: every key from MemTotal through HardwareCorrupted fails the match against AnonHugePages and is skipped with "continue" ...]
00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8132768 kB' 'MemAvailable: 9512996 kB' 'Buffers: 2436 kB' 'Cached: 1594608 kB' 'SwapCached: 0 kB' 'Active: 452676 kB' 'Inactive: 1265720 kB' 'Active(anon): 131816 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122952 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133324 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72148 kB' 'KernelStack: 6320 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.606 07:08:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.606 07:08:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.606 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.607 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8132768 kB' 'MemAvailable: 9512996 kB' 'Buffers: 2436 kB' 'Cached: 1594608 kB' 'SwapCached: 0 kB' 'Active: 452772 kB' 'Inactive: 1265720 kB' 'Active(anon): 131912 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123052 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133324 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72148 kB' 'KernelStack: 6320 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.608 07:08:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.608 07:08:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.608 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.609 nr_hugepages=1024 00:04:05.609 resv_hugepages=0 00:04:05.609 surplus_hugepages=0 00:04:05.609 anon_hugepages=0 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.609 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
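(For reference: the xtrace output above is produced by the get_meminfo helper in setup/common.sh, which scans /proc/meminfo, or a per-NUMA-node meminfo file, one "Key: value" record at a time until it reaches the requested key; that is why every other field appears as a "continue" before the final "echo 0" / "return 0". Below is a minimal sketch reconstructed from the traced statements. The function body mirrors what the trace shows, while the shebang, the shopt line, and the usage example at the end are assumptions added only to make the sketch self-contained and runnable.)

    #!/usr/bin/env bash
    shopt -s extglob    # the +([0-9]) pattern below needs extended globbing

    # Return the value of a single field from /proc/meminfo (or from a
    # per-NUMA-node meminfo file when a node number is given).
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo mem

        # Per-node lookups read the node's own meminfo file instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem <"$mem_f"
        # Per-node files prefix every line with "Node N "; strip that so the
        # key names match the plain /proc/meminfo format.
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan "Key: value [kB]" records until the requested key is found.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # Accounting mirrored from the trace above (hugepages.sh@97-110); the
    # values in the comments are the ones reported by this run.
    anon=$(get_meminfo AnonHugePages)    # 0 kB of anonymous hugepages
    surp=$(get_meminfo HugePages_Surp)   # 0 surplus pages
    resv=$(get_meminfo HugePages_Rsvd)   # 0 reserved pages
    nr_hugepages=1024                    # the pool size this test run requested
    (( nr_hugepages + surp + resv == 1024 )) && get_meminfo HugePages_Total   # 1024

The repeated [[ Key == \H\u\g\e\P\a\g\e\s\_... ]] comparisons in the trace are this loop's string test as expanded by xtrace, with the quoted right-hand side shown character-escaped, and the long printf record before each scan is the snapshot of the meminfo contents being searched.)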
00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8132768 kB' 'MemAvailable: 9512996 kB' 'Buffers: 2436 kB' 'Cached: 1594608 kB' 'SwapCached: 0 kB' 'Active: 452812 kB' 'Inactive: 1265720 kB' 'Active(anon): 131952 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123076 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133324 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72148 kB' 'KernelStack: 6336 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.610 07:08:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.610 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.611 07:08:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.611 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8132768 kB' 'MemUsed: 4109204 kB' 'SwapCached: 0 kB' 'Active: 452844 kB' 'Inactive: 1265720 kB' 'Active(anon): 131984 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 1597044 kB' 'Mapped: 48664 kB' 'AnonPages: 123076 kB' 
'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61176 kB' 'Slab: 133320 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72144 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.612 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.613 07:08:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 
07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.613 07:08:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.614 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.614 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.614 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.614 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.614 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.614 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.614 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.614 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.614 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.614 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.614 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.614 node0=1024 expecting 1024 00:04:05.614 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.614 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.614 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.614 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.614 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:05.614 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:05.614 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:05.614 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:05.614 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:05.614 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.614 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:06.232 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:06.232 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:06.232 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:06.232 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:06.232 07:08:14 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8130000 kB' 'MemAvailable: 9510228 kB' 'Buffers: 2436 kB' 'Cached: 1594608 kB' 'SwapCached: 0 kB' 'Active: 453272 kB' 'Inactive: 1265720 kB' 'Active(anon): 132412 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123568 kB' 'Mapped: 48752 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133276 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72100 kB' 'KernelStack: 6324 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
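The "node0=1024 expecting 1024" line a few entries back is the per-node half of the verification: the script enumerates /sys/devices/system/node/node*, reads the hugepage counters for each node, folds in surplus and reserved pages, and echoes what the node actually holds next to what it should hold. A rough stand-alone equivalent, reusing the get_meminfo_sketch helper from the earlier sketch; the variable names and the exact arithmetic here are illustrative guesses, not the real hugepages.sh logic:

    # Hypothetical per-node tally behind the "node0=1024 expecting 1024" output.
    shopt -s extglob nullglob

    expecting=1024   # assumption: a single-node VM configured for nr_hugepages=1024

    for node_dir in /sys/devices/system/node/node+([0-9]); do
        node=${node_dir##*node}
        total=$(get_meminfo_sketch HugePages_Total "$node")
        surp=$(get_meminfo_sketch HugePages_Surp "$node")
        # Persistent pages are the total minus any surplus the kernel added on demand.
        echo "node$node=$(( total - surp )) expecting $expecting"
        (( total - surp == expecting )) || exit 1
    done

With HugePages_Total at 1024 and HugePages_Surp at 0 on node 0, as read out just above, both sides come to 1024 and the check passes before the next pass requests NRHUGE=512.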
00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.232 07:08:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.232 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.233 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:06.234 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:06.234 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.234 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.234 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.234 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.234 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.234 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.234 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.234 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.234 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8130000 kB' 'MemAvailable: 9510228 kB' 'Buffers: 2436 kB' 'Cached: 1594608 kB' 'SwapCached: 0 kB' 'Active: 452728 kB' 'Inactive: 1265720 kB' 'Active(anon): 131868 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 122976 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133284 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72108 kB' 'KernelStack: 6320 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:04:06.235 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.235 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:06.235 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:06.235 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:06.235 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:06.235 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:06.236 07:08:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8130000 kB' 'MemAvailable: 9510228 kB' 'Buffers: 2436 kB' 'Cached: 1594608 kB' 'SwapCached: 0 kB' 'Active: 452780 kB' 'Inactive: 1265720 kB' 'Active(anon): 131920 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123076 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133280 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72104 kB' 'KernelStack: 6336 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:04:06.237 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:06.237 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:06.237 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:06.237 nr_hugepages=1024
00:04:06.237 resv_hugepages=0
00:04:06.237 surplus_hugepages=0
00:04:06.237 anon_hugepages=0
00:04:06.237 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:06.237 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:06.237 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:06.237 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:06.237 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:06.237 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:06.237 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:06.237 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:06.237 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:06.238 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8130000 kB' 'MemAvailable: 9510228 kB' 'Buffers: 2436 kB' 'Cached: 1594608 kB' 'SwapCached: 0 kB' 'Active: 452728 kB' 'Inactive: 1265720 kB' 'Active(anon): 131868 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 122980 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 61176 kB' 'Slab: 133280 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72104 kB' 'KernelStack: 6320 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
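Immediately below, the scan resolves to a small piece of bookkeeping: the global count must equal nr_hugepages + surplus + reserved, and each node's expected count is bumped by its reserved pages plus its per-node HugePages_Surp before being compared ("node0=1024 expecting 1024"). A condensed sketch of that arithmetic follows; hp_total, surp, resv, node_surp and nodes_test are illustrative stand-ins for the values the helpers echo in the trace (1024 and 0).

# Condensed sketch of the hugepage accounting exercised just below in the trace;
# the concrete values are taken from what the log echoes, the names are stand-ins.
hp_total=1024 nr_hugepages=1024 surp=0 resv=0
(( hp_total == nr_hugepages + surp + resv )) || echo "global hugepage count mismatch"

nodes_test=()                # expected hugepages per NUMA node, indexed by node id
nodes_test[0]=1024
node=0
node_surp=0                  # per-node HugePages_Surp read from node0/meminfo
(( nodes_test[node] += resv + node_surp ))
echo "node${node}=${nodes_test[node]} expecting 1024"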
00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.239 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8130000 kB' 'MemUsed: 4111972 kB' 'SwapCached: 0 kB' 'Active: 452896 kB' 'Inactive: 1265720 kB' 'Active(anon): 132036 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 
kB' 'Inactive(file): 1265720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 1597044 kB' 'Mapped: 48664 kB' 'AnonPages: 123040 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61176 kB' 'Slab: 133280 kB' 'SReclaimable: 61176 kB' 'SUnreclaim: 72104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 
07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.240 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 07:08:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:06.241 node0=1024 expecting 1024 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:06.241 00:04:06.241 real 0m1.142s 00:04:06.241 user 0m0.599s 00:04:06.241 sys 0m0.534s 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.241 07:08:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:06.241 ************************************ 00:04:06.241 END TEST no_shrink_alloc 00:04:06.241 ************************************ 00:04:06.241 07:08:15 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:06.241 07:08:15 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:06.241 07:08:15 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:06.241 07:08:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:06.241 
07:08:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.241 07:08:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.241 07:08:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.241 07:08:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.241 07:08:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:06.241 07:08:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:06.241 ************************************ 00:04:06.241 END TEST hugepages 00:04:06.241 ************************************ 00:04:06.241 00:04:06.241 real 0m4.705s 00:04:06.241 user 0m2.299s 00:04:06.241 sys 0m2.389s 00:04:06.241 07:08:15 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.241 07:08:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:06.500 07:08:15 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:06.500 07:08:15 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:06.500 07:08:15 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.500 07:08:15 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.500 07:08:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:06.500 ************************************ 00:04:06.500 START TEST driver 00:04:06.500 ************************************ 00:04:06.500 07:08:15 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:06.500 * Looking for test storage... 00:04:06.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:06.500 07:08:15 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:06.500 07:08:15 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:06.500 07:08:15 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:07.086 07:08:15 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:07.086 07:08:15 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.086 07:08:15 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.086 07:08:15 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:07.086 ************************************ 00:04:07.086 START TEST guess_driver 00:04:07.086 ************************************ 00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
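The guess_driver decision being traced around this point reduces to: prefer vfio when IOMMU groups exist (or the unsafe no-IOMMU opt-in is set), otherwise fall back to uio_pci_generic, accepting it only if modprobe --show-depends resolves it to real .ko modules. The sketch below is a simplified rendering of that logic; pick_driver_sketch and its local names are illustrative, not the verbatim setup/driver.sh.

# Simplified sketch of the driver pick traced here (not the verbatim driver.sh).
pick_driver_sketch() {
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe
    unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null)
    # vfio-pci needs populated IOMMU groups, or the explicit unsafe no-IOMMU opt-in
    if (( ${#groups[@]} > 0 )) && [[ -e ${groups[0]} ]] || [[ $unsafe == Y ]]; then
        echo vfio-pci
        return 0
    fi
    # fall back to uio_pci_generic if modprobe can resolve it to real modules
    if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi
    echo 'No valid driver found'
    return 1
}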
00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:07.086 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:07.086 Looking for driver=uio_pci_generic 00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.086 07:08:15 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:07.651 07:08:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:07.651 07:08:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:07.651 07:08:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.651 07:08:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.651 07:08:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:07.651 07:08:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.908 07:08:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.908 07:08:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:07.908 07:08:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.908 07:08:16 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:07.908 07:08:16 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:07.908 07:08:16 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.908 07:08:16 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:08.475 00:04:08.475 real 0m1.403s 00:04:08.475 user 0m0.520s 00:04:08.475 sys 0m0.878s 00:04:08.475 07:08:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:08.475 ************************************ 00:04:08.475 END TEST guess_driver 00:04:08.475 ************************************ 00:04:08.475 07:08:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:08.475 07:08:17 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:08.475 ************************************ 00:04:08.475 END TEST driver 00:04:08.475 ************************************ 00:04:08.475 00:04:08.475 real 0m2.121s 00:04:08.475 user 0m0.752s 00:04:08.475 sys 0m1.404s 00:04:08.475 07:08:17 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.475 07:08:17 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:08.475 07:08:17 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:08.475 07:08:17 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:08.475 07:08:17 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.475 07:08:17 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.475 07:08:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:08.475 ************************************ 00:04:08.475 START TEST devices 00:04:08.475 ************************************ 00:04:08.475 07:08:17 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:08.733 * Looking for test storage... 00:04:08.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:08.733 07:08:17 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:08.733 07:08:17 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:08.733 07:08:17 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:08.733 07:08:17 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:09.299 07:08:18 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:09.299 07:08:18 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:09.299 07:08:18 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:09.299 07:08:18 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:09.299 07:08:18 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:09.299 07:08:18 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:09.299 07:08:18 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:09.299 07:08:18 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:09.299 07:08:18 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:09.299 07:08:18 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:09.299 07:08:18 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:09.299 07:08:18 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:09.299 07:08:18 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:09.299 07:08:18 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:09.299 07:08:18 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:09.299 07:08:18 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
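The devices run that follows repeats one filter per namespace: skip zoned block devices (queue/zoned other than "none"), treat a disk as free when no partition table is found, and keep it only if it is at least min_disk_size (3221225472 bytes, i.e. 3 GiB, as set in devices.sh). The sketch below reproduces only the sysfs and blkid checks visible in the log; the spdk-gpt.py step is omitted, and the loop and array names (dev, usable) are illustrative.

# Compact sketch of the device filter traced below; spdk-gpt.py is left out,
# only the sysfs/blkid checks that appear in the log are reproduced.
min_disk_size=3221225472   # 3 GiB, as in devices.sh@198
usable=()
for dev in /sys/block/nvme*; do
    name=${dev##*/}
    # skip zoned namespaces: anything but "none" in queue/zoned
    [[ $(cat "$dev/queue/zoned" 2>/dev/null) != none ]] && continue
    # an existing partition table means the disk is already in use
    [[ -n $(blkid -s PTTYPE -o value "/dev/$name" 2>/dev/null) ]] && continue
    # size in bytes = 512-byte sectors reported by sysfs
    size=$(( $(cat "$dev/size") * 512 ))
    (( size >= min_disk_size )) && usable+=("$name")
done
printf 'usable test disk: %s\n' "${usable[@]}"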
00:04:09.299 07:08:18 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:09.299 07:08:18 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:09.299 07:08:18 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:09.299 07:08:18 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:09.299 07:08:18 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:09.299 07:08:18 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:09.299 07:08:18 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:09.300 07:08:18 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:09.300 07:08:18 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:09.300 07:08:18 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:09.300 07:08:18 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:09.300 07:08:18 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:09.300 07:08:18 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:09.300 07:08:18 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:09.300 07:08:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:09.300 07:08:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:09.300 07:08:18 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:09.300 07:08:18 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:09.300 07:08:18 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:09.300 07:08:18 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:09.300 07:08:18 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:09.300 No valid GPT data, bailing 00:04:09.558 07:08:18 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:09.558 07:08:18 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:09.558 07:08:18 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:09.558 07:08:18 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:09.558 07:08:18 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:09.558 07:08:18 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:09.558 
07:08:18 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:09.558 07:08:18 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:09.558 No valid GPT data, bailing 00:04:09.558 07:08:18 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:09.558 07:08:18 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:09.558 07:08:18 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:09.558 07:08:18 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:09.558 07:08:18 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:09.558 07:08:18 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:09.558 07:08:18 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:09.558 07:08:18 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:09.558 No valid GPT data, bailing 00:04:09.558 07:08:18 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:09.558 07:08:18 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:09.558 07:08:18 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:09.558 07:08:18 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:09.558 07:08:18 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:09.558 07:08:18 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:09.558 07:08:18 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:09.558 07:08:18 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:09.558 07:08:18 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:09.558 No valid GPT data, bailing 00:04:09.558 07:08:18 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:09.817 07:08:18 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:09.818 07:08:18 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:09.818 07:08:18 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:09.818 07:08:18 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:09.818 07:08:18 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:09.818 07:08:18 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:09.818 07:08:18 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:09.818 07:08:18 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:09.818 07:08:18 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:09.818 07:08:18 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:09.818 07:08:18 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:09.818 07:08:18 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:09.818 07:08:18 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.818 07:08:18 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.818 07:08:18 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:09.818 ************************************ 00:04:09.818 START TEST nvme_mount 00:04:09.818 ************************************ 00:04:09.818 07:08:18 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:09.818 07:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:09.818 07:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:09.818 07:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:09.818 07:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:09.818 07:08:18 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:09.818 07:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:09.818 07:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:09.818 07:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:09.818 07:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:09.818 07:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:09.818 07:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:09.818 07:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:09.818 07:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:09.818 07:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:09.818 07:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:09.818 07:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:09.818 07:08:18 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:04:09.818 07:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:09.818 07:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:10.753 Creating new GPT entries in memory. 00:04:10.753 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:10.753 other utilities. 00:04:10.753 07:08:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:10.753 07:08:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:10.753 07:08:19 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:10.753 07:08:19 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:10.753 07:08:19 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:11.688 Creating new GPT entries in memory. 00:04:11.688 The operation has completed successfully. 00:04:11.688 07:08:20 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:11.688 07:08:20 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:11.688 07:08:20 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57041 00:04:11.688 07:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:11.688 07:08:20 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:11.688 07:08:20 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:11.688 07:08:20 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:11.688 07:08:20 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:11.688 07:08:20 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:11.947 07:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:11.947 07:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:11.947 07:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:11.947 07:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:11.947 07:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:11.947 07:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:11.947 07:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:11.947 07:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:11.947 07:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:11.947 07:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.947 07:08:20 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:11.947 07:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:11.947 07:08:20 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.947 07:08:20 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:11.947 07:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:11.947 07:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:11.947 07:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:11.947 07:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.947 07:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:11.947 07:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.206 07:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:12.206 07:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.206 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:12.206 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.206 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:12.206 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:12.206 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.206 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:12.206 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:12.206 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:12.206 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.206 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.206 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:12.206 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:12.206 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:12.206 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:12.206 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:12.469 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:12.469 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:12.469 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:12.469 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:12.469 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:12.469 07:08:21 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:12.469 07:08:21 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.731 07:08:21 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:12.731 07:08:21 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:12.731 07:08:21 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.732 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:12.732 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:12.732 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:12.732 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.732 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:12.732 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:12.732 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:12.732 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:12.732 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:12.732 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.732 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:12.732 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:12.732 07:08:21 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.732 07:08:21 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:12.732 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:12.732 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:12.732 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:12.732 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.732 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:12.732 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.989 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:12.989 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.989 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:12.989 07:08:21 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.989 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:12.989 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:12.989 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.989 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:12.989 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:12.989 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:13.248 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:13.248 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:13.248 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:13.248 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:13.248 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:13.248 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:13.248 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:13.248 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:13.248 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.248 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:13.248 07:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:13.248 07:08:21 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.248 07:08:21 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:13.506 07:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.506 07:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:13.506 07:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:13.506 07:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.506 07:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.506 07:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.506 07:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.506 07:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.765 07:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.765 07:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.765 07:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:13.765 07:08:22 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:04:13.765 07:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:13.765 07:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:13.765 07:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:13.765 07:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:13.765 07:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:13.765 07:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:13.765 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:13.765 00:04:13.765 real 0m4.000s 00:04:13.765 user 0m0.671s 00:04:13.765 sys 0m1.066s 00:04:13.765 07:08:22 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.765 ************************************ 00:04:13.765 END TEST nvme_mount 00:04:13.765 ************************************ 00:04:13.765 07:08:22 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:13.765 07:08:22 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:13.765 07:08:22 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:13.765 07:08:22 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.765 07:08:22 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.765 07:08:22 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:13.765 ************************************ 00:04:13.765 START TEST dm_mount 00:04:13.765 ************************************ 00:04:13.765 07:08:22 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:13.765 07:08:22 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:13.765 07:08:22 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:13.765 07:08:22 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:13.765 07:08:22 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:13.765 07:08:22 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:13.765 07:08:22 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:13.765 07:08:22 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:13.765 07:08:22 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:13.765 07:08:22 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:13.765 07:08:22 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:13.765 07:08:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:13.765 07:08:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.765 07:08:22 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:13.765 07:08:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:13.765 07:08:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.765 07:08:22 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:13.765 07:08:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:13.766 07:08:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
00:04:13.766 07:08:22 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:13.766 07:08:22 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:13.766 07:08:22 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:14.696 Creating new GPT entries in memory. 00:04:14.696 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:14.696 other utilities. 00:04:14.696 07:08:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:14.696 07:08:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:14.696 07:08:23 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:14.696 07:08:23 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:14.696 07:08:23 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:16.133 Creating new GPT entries in memory. 00:04:16.133 The operation has completed successfully. 00:04:16.133 07:08:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:16.133 07:08:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:16.133 07:08:24 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:16.133 07:08:24 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:16.133 07:08:24 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:17.067 The operation has completed successfully. 
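A condensed sketch of the partitioning sequence traced above, with the commands and LBA ranges copied from the xtrace output (roughly 128 MiB per partition, assuming 512-byte sectors; that size is an inference, not something the log states):

    # wipe any existing GPT/MBR metadata on the test namespace
    sgdisk /dev/nvme0n1 --zap-all
    # create the two partitions the dm_mount test later stacks a device-mapper target on
    flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191    # becomes nvme0n1p1
    flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335  # becomes nvme0n1p2
    # block until the kernel has emitted uevents for both new partitions
    scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2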
00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57474 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:17.067 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:17.068 07:08:25 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.068 07:08:25 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:17.068 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.068 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:17.068 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:17.068 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.068 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.068 07:08:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.327 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.327 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.327 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.327 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.327 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:17.327 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:17.327 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:17.327 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:17.327 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:17.327 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:17.327 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:17.327 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:17.327 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:17.327 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:17.327 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:17.327 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:17.327 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:17.327 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:17.327 07:08:26 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.327 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:17.327 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:17.327 07:08:26 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.586 07:08:26 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:17.586 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.586 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:17.586 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:17.586 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.586 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.586 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.877 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.877 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.877 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.877 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.877 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:17.877 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:17.877 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:17.877 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:17.877 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:17.877 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:17.877 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:17.877 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:17.877 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:18.135 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:18.135 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:18.135 07:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:18.135 00:04:18.135 real 0m4.258s 00:04:18.135 user 0m0.470s 00:04:18.135 sys 0m0.736s 00:04:18.135 07:08:26 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.135 ************************************ 00:04:18.135 07:08:26 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:18.135 END TEST dm_mount 00:04:18.135 ************************************ 00:04:18.135 07:08:26 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:18.135 07:08:26 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:18.135 07:08:26 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:18.135 07:08:26 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.135 07:08:26 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:18.135 07:08:26 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:18.135 07:08:26 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:18.135 07:08:26 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:18.394 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:18.394 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:18.394 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:18.394 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:18.394 07:08:27 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:18.394 07:08:27 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:18.394 07:08:27 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:18.394 07:08:27 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:18.394 07:08:27 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:18.394 07:08:27 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:18.394 07:08:27 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:18.394 00:04:18.394 real 0m9.806s 00:04:18.394 user 0m1.791s 00:04:18.394 sys 0m2.409s 00:04:18.394 07:08:27 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.394 ************************************ 00:04:18.394 END TEST devices 00:04:18.394 ************************************ 00:04:18.394 07:08:27 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:18.394 07:08:27 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:18.394 00:04:18.394 real 0m21.662s 00:04:18.394 user 0m7.051s 00:04:18.394 sys 0m8.972s 00:04:18.394 07:08:27 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.394 07:08:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:18.394 ************************************ 00:04:18.394 END TEST setup.sh 00:04:18.394 ************************************ 00:04:18.394 07:08:27 -- common/autotest_common.sh@1142 -- # return 0 00:04:18.394 07:08:27 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:18.960 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.960 Hugepages 00:04:18.960 node hugesize free / total 00:04:18.960 node0 1048576kB 0 / 0 00:04:18.960 node0 2048kB 2048 / 2048 00:04:18.960 00:04:18.960 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:19.218 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:19.218 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:19.218 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:19.218 07:08:28 -- spdk/autotest.sh@130 -- # uname -s 00:04:19.218 07:08:28 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:19.218 07:08:28 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:19.218 07:08:28 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:19.784 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:20.043 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:20.043 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:20.043 07:08:28 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:21.419 07:08:29 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:21.419 07:08:29 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:21.419 07:08:29 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:21.419 07:08:29 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:21.419 07:08:29 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:21.419 07:08:29 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:21.419 07:08:29 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:21.419 07:08:29 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:21.419 07:08:29 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:21.419 07:08:29 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:21.419 07:08:29 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:21.419 07:08:29 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:21.419 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:21.419 Waiting for block devices as requested 00:04:21.677 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:21.677 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:21.677 07:08:30 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:21.677 07:08:30 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:21.677 07:08:30 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:21.677 07:08:30 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:21.677 07:08:30 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:21.677 07:08:30 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:21.677 07:08:30 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:21.677 07:08:30 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:21.677 07:08:30 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:21.677 07:08:30 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:21.677 07:08:30 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:21.677 07:08:30 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:21.677 07:08:30 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:21.677 07:08:30 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:21.677 07:08:30 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:21.677 07:08:30 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:21.677 07:08:30 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:21.677 07:08:30 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:21.677 07:08:30 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:21.677 07:08:30 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:21.677 07:08:30 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:21.677 07:08:30 -- common/autotest_common.sh@1557 -- # continue 00:04:21.677 
07:08:30 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:21.677 07:08:30 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:21.677 07:08:30 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:21.677 07:08:30 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:21.677 07:08:30 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:21.677 07:08:30 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:21.677 07:08:30 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:21.677 07:08:30 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:21.677 07:08:30 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:21.677 07:08:30 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:21.677 07:08:30 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:21.677 07:08:30 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:21.677 07:08:30 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:21.677 07:08:30 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:21.677 07:08:30 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:21.677 07:08:30 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:21.677 07:08:30 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:21.677 07:08:30 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:21.677 07:08:30 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:21.677 07:08:30 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:21.677 07:08:30 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:21.677 07:08:30 -- common/autotest_common.sh@1557 -- # continue 00:04:21.677 07:08:30 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:21.677 07:08:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:21.677 07:08:30 -- common/autotest_common.sh@10 -- # set +x 00:04:21.934 07:08:30 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:21.934 07:08:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:21.934 07:08:30 -- common/autotest_common.sh@10 -- # set +x 00:04:21.934 07:08:30 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:22.499 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:22.499 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.499 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.757 07:08:31 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:22.757 07:08:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:22.757 07:08:31 -- common/autotest_common.sh@10 -- # set +x 00:04:22.757 07:08:31 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:22.757 07:08:31 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:22.757 07:08:31 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:22.757 07:08:31 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:22.757 07:08:31 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:22.757 07:08:31 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:22.757 07:08:31 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:22.757 07:08:31 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:22.757 07:08:31 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:22.757 07:08:31 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:22.757 07:08:31 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:22.757 07:08:31 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:22.757 07:08:31 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:22.757 07:08:31 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:22.757 07:08:31 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:22.757 07:08:31 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:22.757 07:08:31 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:22.757 07:08:31 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:22.757 07:08:31 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:22.757 07:08:31 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:22.757 07:08:31 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:22.757 07:08:31 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:22.757 07:08:31 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:22.757 07:08:31 -- common/autotest_common.sh@1593 -- # return 0 00:04:22.757 07:08:31 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:22.757 07:08:31 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:22.757 07:08:31 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:22.757 07:08:31 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:22.757 07:08:31 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:22.757 07:08:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:22.757 07:08:31 -- common/autotest_common.sh@10 -- # set +x 00:04:22.757 07:08:31 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:22.757 07:08:31 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:22.757 07:08:31 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:22.757 07:08:31 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:22.757 07:08:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.757 07:08:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.757 07:08:31 -- common/autotest_common.sh@10 -- # set +x 00:04:22.757 ************************************ 00:04:22.757 START TEST env 00:04:22.757 ************************************ 00:04:22.757 07:08:31 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:22.757 * Looking for test storage... 
00:04:22.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:22.757 07:08:31 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:22.757 07:08:31 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.757 07:08:31 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.757 07:08:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.757 ************************************ 00:04:22.757 START TEST env_memory 00:04:22.757 ************************************ 00:04:22.757 07:08:31 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:23.053 00:04:23.053 00:04:23.053 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.053 http://cunit.sourceforge.net/ 00:04:23.053 00:04:23.053 00:04:23.053 Suite: memory 00:04:23.053 Test: alloc and free memory map ...[2024-07-15 07:08:31.745959] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:23.053 passed 00:04:23.053 Test: mem map translation ...[2024-07-15 07:08:31.776440] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:23.053 [2024-07-15 07:08:31.776475] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:23.053 [2024-07-15 07:08:31.776529] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:23.053 [2024-07-15 07:08:31.776539] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:23.053 passed 00:04:23.054 Test: mem map registration ...[2024-07-15 07:08:31.840172] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:23.054 [2024-07-15 07:08:31.840199] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:23.054 passed 00:04:23.054 Test: mem map adjacent registrations ...passed 00:04:23.054 00:04:23.054 Run Summary: Type Total Ran Passed Failed Inactive 00:04:23.054 suites 1 1 n/a 0 0 00:04:23.054 tests 4 4 4 0 0 00:04:23.054 asserts 152 152 152 0 n/a 00:04:23.054 00:04:23.054 Elapsed time = 0.212 seconds 00:04:23.054 00:04:23.054 real 0m0.227s 00:04:23.054 user 0m0.214s 00:04:23.054 sys 0m0.010s 00:04:23.054 07:08:31 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.054 07:08:31 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:23.054 ************************************ 00:04:23.054 END TEST env_memory 00:04:23.054 ************************************ 00:04:23.054 07:08:31 env -- common/autotest_common.sh@1142 -- # return 0 00:04:23.054 07:08:31 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:23.054 07:08:31 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.054 07:08:31 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.054 07:08:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.343 ************************************ 00:04:23.343 START TEST env_vtophys 
00:04:23.343 ************************************ 00:04:23.343 07:08:31 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:23.343 EAL: lib.eal log level changed from notice to debug 00:04:23.343 EAL: Detected lcore 0 as core 0 on socket 0 00:04:23.343 EAL: Detected lcore 1 as core 0 on socket 0 00:04:23.343 EAL: Detected lcore 2 as core 0 on socket 0 00:04:23.343 EAL: Detected lcore 3 as core 0 on socket 0 00:04:23.343 EAL: Detected lcore 4 as core 0 on socket 0 00:04:23.343 EAL: Detected lcore 5 as core 0 on socket 0 00:04:23.343 EAL: Detected lcore 6 as core 0 on socket 0 00:04:23.343 EAL: Detected lcore 7 as core 0 on socket 0 00:04:23.343 EAL: Detected lcore 8 as core 0 on socket 0 00:04:23.343 EAL: Detected lcore 9 as core 0 on socket 0 00:04:23.343 EAL: Maximum logical cores by configuration: 128 00:04:23.343 EAL: Detected CPU lcores: 10 00:04:23.343 EAL: Detected NUMA nodes: 1 00:04:23.343 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:23.343 EAL: Detected shared linkage of DPDK 00:04:23.343 EAL: No shared files mode enabled, IPC will be disabled 00:04:23.343 EAL: Selected IOVA mode 'PA' 00:04:23.343 EAL: Probing VFIO support... 00:04:23.343 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:23.343 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:23.343 EAL: Ask a virtual area of 0x2e000 bytes 00:04:23.343 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:23.343 EAL: Setting up physically contiguous memory... 00:04:23.343 EAL: Setting maximum number of open files to 524288 00:04:23.343 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:23.343 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:23.343 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.343 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:23.343 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.343 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.343 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:23.343 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:23.343 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.343 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:23.344 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.344 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.344 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:23.344 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:23.344 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.344 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:23.344 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.344 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.344 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:23.344 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:23.344 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.344 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:23.344 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.344 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.344 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:23.344 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:23.344 EAL: Hugepages will be freed exactly as allocated. 
00:04:23.344 EAL: No shared files mode enabled, IPC is disabled 00:04:23.344 EAL: No shared files mode enabled, IPC is disabled 00:04:23.344 EAL: TSC frequency is ~2200000 KHz 00:04:23.344 EAL: Main lcore 0 is ready (tid=7f4575f08a00;cpuset=[0]) 00:04:23.344 EAL: Trying to obtain current memory policy. 00:04:23.344 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.344 EAL: Restoring previous memory policy: 0 00:04:23.344 EAL: request: mp_malloc_sync 00:04:23.344 EAL: No shared files mode enabled, IPC is disabled 00:04:23.344 EAL: Heap on socket 0 was expanded by 2MB 00:04:23.344 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:23.344 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:23.344 EAL: Mem event callback 'spdk:(nil)' registered 00:04:23.344 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:23.344 00:04:23.344 00:04:23.344 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.344 http://cunit.sourceforge.net/ 00:04:23.344 00:04:23.344 00:04:23.344 Suite: components_suite 00:04:23.344 Test: vtophys_malloc_test ...passed 00:04:23.344 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:23.344 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.344 EAL: Restoring previous memory policy: 4 00:04:23.344 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.344 EAL: request: mp_malloc_sync 00:04:23.344 EAL: No shared files mode enabled, IPC is disabled 00:04:23.344 EAL: Heap on socket 0 was expanded by 4MB 00:04:23.344 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.344 EAL: request: mp_malloc_sync 00:04:23.344 EAL: No shared files mode enabled, IPC is disabled 00:04:23.344 EAL: Heap on socket 0 was shrunk by 4MB 00:04:23.344 EAL: Trying to obtain current memory policy. 00:04:23.344 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.344 EAL: Restoring previous memory policy: 4 00:04:23.344 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.344 EAL: request: mp_malloc_sync 00:04:23.344 EAL: No shared files mode enabled, IPC is disabled 00:04:23.344 EAL: Heap on socket 0 was expanded by 6MB 00:04:23.344 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.344 EAL: request: mp_malloc_sync 00:04:23.344 EAL: No shared files mode enabled, IPC is disabled 00:04:23.344 EAL: Heap on socket 0 was shrunk by 6MB 00:04:23.344 EAL: Trying to obtain current memory policy. 00:04:23.344 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.344 EAL: Restoring previous memory policy: 4 00:04:23.344 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.344 EAL: request: mp_malloc_sync 00:04:23.344 EAL: No shared files mode enabled, IPC is disabled 00:04:23.344 EAL: Heap on socket 0 was expanded by 10MB 00:04:23.344 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.344 EAL: request: mp_malloc_sync 00:04:23.344 EAL: No shared files mode enabled, IPC is disabled 00:04:23.344 EAL: Heap on socket 0 was shrunk by 10MB 00:04:23.344 EAL: Trying to obtain current memory policy. 
00:04:23.344 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.344 EAL: Restoring previous memory policy: 4 00:04:23.344 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.344 EAL: request: mp_malloc_sync 00:04:23.344 EAL: No shared files mode enabled, IPC is disabled 00:04:23.344 EAL: Heap on socket 0 was expanded by 18MB 00:04:23.344 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.344 EAL: request: mp_malloc_sync 00:04:23.344 EAL: No shared files mode enabled, IPC is disabled 00:04:23.344 EAL: Heap on socket 0 was shrunk by 18MB 00:04:23.344 EAL: Trying to obtain current memory policy. 00:04:23.344 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.344 EAL: Restoring previous memory policy: 4 00:04:23.344 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.344 EAL: request: mp_malloc_sync 00:04:23.344 EAL: No shared files mode enabled, IPC is disabled 00:04:23.344 EAL: Heap on socket 0 was expanded by 34MB 00:04:23.344 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.344 EAL: request: mp_malloc_sync 00:04:23.344 EAL: No shared files mode enabled, IPC is disabled 00:04:23.344 EAL: Heap on socket 0 was shrunk by 34MB 00:04:23.344 EAL: Trying to obtain current memory policy. 00:04:23.344 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.344 EAL: Restoring previous memory policy: 4 00:04:23.344 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.344 EAL: request: mp_malloc_sync 00:04:23.344 EAL: No shared files mode enabled, IPC is disabled 00:04:23.344 EAL: Heap on socket 0 was expanded by 66MB 00:04:23.344 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.344 EAL: request: mp_malloc_sync 00:04:23.344 EAL: No shared files mode enabled, IPC is disabled 00:04:23.344 EAL: Heap on socket 0 was shrunk by 66MB 00:04:23.344 EAL: Trying to obtain current memory policy. 00:04:23.344 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.604 EAL: Restoring previous memory policy: 4 00:04:23.604 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.604 EAL: request: mp_malloc_sync 00:04:23.604 EAL: No shared files mode enabled, IPC is disabled 00:04:23.604 EAL: Heap on socket 0 was expanded by 130MB 00:04:23.604 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.604 EAL: request: mp_malloc_sync 00:04:23.604 EAL: No shared files mode enabled, IPC is disabled 00:04:23.604 EAL: Heap on socket 0 was shrunk by 130MB 00:04:23.604 EAL: Trying to obtain current memory policy. 00:04:23.604 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.604 EAL: Restoring previous memory policy: 4 00:04:23.604 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.604 EAL: request: mp_malloc_sync 00:04:23.604 EAL: No shared files mode enabled, IPC is disabled 00:04:23.604 EAL: Heap on socket 0 was expanded by 258MB 00:04:23.604 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.604 EAL: request: mp_malloc_sync 00:04:23.604 EAL: No shared files mode enabled, IPC is disabled 00:04:23.604 EAL: Heap on socket 0 was shrunk by 258MB 00:04:23.604 EAL: Trying to obtain current memory policy. 
00:04:23.604 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.863 EAL: Restoring previous memory policy: 4 00:04:23.863 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.863 EAL: request: mp_malloc_sync 00:04:23.863 EAL: No shared files mode enabled, IPC is disabled 00:04:23.863 EAL: Heap on socket 0 was expanded by 514MB 00:04:23.863 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.863 EAL: request: mp_malloc_sync 00:04:23.863 EAL: No shared files mode enabled, IPC is disabled 00:04:23.863 EAL: Heap on socket 0 was shrunk by 514MB 00:04:23.863 EAL: Trying to obtain current memory policy. 00:04:23.863 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.131 EAL: Restoring previous memory policy: 4 00:04:24.131 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.131 EAL: request: mp_malloc_sync 00:04:24.131 EAL: No shared files mode enabled, IPC is disabled 00:04:24.131 EAL: Heap on socket 0 was expanded by 1026MB 00:04:24.131 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.131 passed 00:04:24.131 00:04:24.131 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.132 suites 1 1 n/a 0 0 00:04:24.132 tests 2 2 2 0 0 00:04:24.132 asserts 5162 5162 5162 0 n/a 00:04:24.132 00:04:24.132 Elapsed time = 0.874 seconds 00:04:24.132 EAL: request: mp_malloc_sync 00:04:24.132 EAL: No shared files mode enabled, IPC is disabled 00:04:24.132 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:24.132 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.132 EAL: request: mp_malloc_sync 00:04:24.132 EAL: No shared files mode enabled, IPC is disabled 00:04:24.132 EAL: Heap on socket 0 was shrunk by 2MB 00:04:24.132 EAL: No shared files mode enabled, IPC is disabled 00:04:24.132 EAL: No shared files mode enabled, IPC is disabled 00:04:24.132 EAL: No shared files mode enabled, IPC is disabled 00:04:24.132 00:04:24.132 real 0m1.080s 00:04:24.132 user 0m0.520s 00:04:24.132 sys 0m0.422s 00:04:24.132 07:08:33 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.132 07:08:33 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:24.132 ************************************ 00:04:24.132 END TEST env_vtophys 00:04:24.132 ************************************ 00:04:24.391 07:08:33 env -- common/autotest_common.sh@1142 -- # return 0 00:04:24.391 07:08:33 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:24.391 07:08:33 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.391 07:08:33 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.391 07:08:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.391 ************************************ 00:04:24.391 START TEST env_pci 00:04:24.391 ************************************ 00:04:24.391 07:08:33 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:24.391 00:04:24.391 00:04:24.391 CUnit - A unit testing framework for C - Version 2.1-3 00:04:24.391 http://cunit.sourceforge.net/ 00:04:24.391 00:04:24.391 00:04:24.391 Suite: pci 00:04:24.391 Test: pci_hook ...[2024-07-15 07:08:33.127066] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58656 has claimed it 00:04:24.391 passed 00:04:24.391 00:04:24.391 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.391 suites 1 1 n/a 0 0 00:04:24.391 tests 1 1 1 0 0 00:04:24.391 asserts 25 25 25 0 n/a 00:04:24.391 
00:04:24.391 Elapsed time = 0.002 seconds 00:04:24.391 EAL: Cannot find device (10000:00:01.0) 00:04:24.391 EAL: Failed to attach device on primary process 00:04:24.391 00:04:24.391 real 0m0.018s 00:04:24.391 user 0m0.004s 00:04:24.391 sys 0m0.013s 00:04:24.391 07:08:33 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.391 07:08:33 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:24.391 ************************************ 00:04:24.391 END TEST env_pci 00:04:24.391 ************************************ 00:04:24.391 07:08:33 env -- common/autotest_common.sh@1142 -- # return 0 00:04:24.391 07:08:33 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:24.391 07:08:33 env -- env/env.sh@15 -- # uname 00:04:24.391 07:08:33 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:24.391 07:08:33 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:24.391 07:08:33 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:24.391 07:08:33 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:24.391 07:08:33 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.391 07:08:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.391 ************************************ 00:04:24.391 START TEST env_dpdk_post_init 00:04:24.391 ************************************ 00:04:24.391 07:08:33 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:24.391 EAL: Detected CPU lcores: 10 00:04:24.391 EAL: Detected NUMA nodes: 1 00:04:24.391 EAL: Detected shared linkage of DPDK 00:04:24.391 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:24.391 EAL: Selected IOVA mode 'PA' 00:04:24.391 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:24.650 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:24.650 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:24.650 Starting DPDK initialization... 00:04:24.650 Starting SPDK post initialization... 00:04:24.650 SPDK NVMe probe 00:04:24.650 Attaching to 0000:00:10.0 00:04:24.650 Attaching to 0000:00:11.0 00:04:24.650 Attached to 0000:00:10.0 00:04:24.650 Attached to 0000:00:11.0 00:04:24.650 Cleaning up... 
00:04:24.650 00:04:24.650 real 0m0.177s 00:04:24.650 user 0m0.046s 00:04:24.650 sys 0m0.032s 00:04:24.650 07:08:33 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.650 07:08:33 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:24.650 ************************************ 00:04:24.650 END TEST env_dpdk_post_init 00:04:24.650 ************************************ 00:04:24.650 07:08:33 env -- common/autotest_common.sh@1142 -- # return 0 00:04:24.650 07:08:33 env -- env/env.sh@26 -- # uname 00:04:24.650 07:08:33 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:24.650 07:08:33 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:24.650 07:08:33 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.650 07:08:33 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.650 07:08:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.650 ************************************ 00:04:24.650 START TEST env_mem_callbacks 00:04:24.650 ************************************ 00:04:24.650 07:08:33 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:24.650 EAL: Detected CPU lcores: 10 00:04:24.650 EAL: Detected NUMA nodes: 1 00:04:24.650 EAL: Detected shared linkage of DPDK 00:04:24.650 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:24.650 EAL: Selected IOVA mode 'PA' 00:04:24.650 00:04:24.650 00:04:24.650 CUnit - A unit testing framework for C - Version 2.1-3 00:04:24.650 http://cunit.sourceforge.net/ 00:04:24.650 00:04:24.650 00:04:24.650 Suite: memory 00:04:24.650 Test: test ... 00:04:24.650 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:24.650 register 0x200000200000 2097152 00:04:24.650 malloc 3145728 00:04:24.650 register 0x200000400000 4194304 00:04:24.650 buf 0x200000500000 len 3145728 PASSED 00:04:24.650 malloc 64 00:04:24.650 buf 0x2000004fff40 len 64 PASSED 00:04:24.650 malloc 4194304 00:04:24.650 register 0x200000800000 6291456 00:04:24.650 buf 0x200000a00000 len 4194304 PASSED 00:04:24.650 free 0x200000500000 3145728 00:04:24.650 free 0x2000004fff40 64 00:04:24.650 unregister 0x200000400000 4194304 PASSED 00:04:24.650 free 0x200000a00000 4194304 00:04:24.650 unregister 0x200000800000 6291456 PASSED 00:04:24.650 malloc 8388608 00:04:24.650 register 0x200000400000 10485760 00:04:24.650 buf 0x200000600000 len 8388608 PASSED 00:04:24.650 free 0x200000600000 8388608 00:04:24.650 unregister 0x200000400000 10485760 PASSED 00:04:24.650 passed 00:04:24.650 00:04:24.650 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.650 suites 1 1 n/a 0 0 00:04:24.650 tests 1 1 1 0 0 00:04:24.650 asserts 15 15 15 0 n/a 00:04:24.650 00:04:24.650 Elapsed time = 0.006 seconds 00:04:24.650 00:04:24.650 real 0m0.136s 00:04:24.650 user 0m0.011s 00:04:24.650 sys 0m0.024s 00:04:24.650 07:08:33 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.650 07:08:33 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:24.650 ************************************ 00:04:24.650 END TEST env_mem_callbacks 00:04:24.650 ************************************ 00:04:24.650 07:08:33 env -- common/autotest_common.sh@1142 -- # return 0 00:04:24.650 00:04:24.650 real 0m1.980s 00:04:24.650 user 0m0.909s 00:04:24.650 sys 0m0.712s 00:04:24.650 07:08:33 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.650 
07:08:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.650 ************************************ 00:04:24.650 END TEST env 00:04:24.650 ************************************ 00:04:24.909 07:08:33 -- common/autotest_common.sh@1142 -- # return 0 00:04:24.909 07:08:33 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:24.909 07:08:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.909 07:08:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.909 07:08:33 -- common/autotest_common.sh@10 -- # set +x 00:04:24.909 ************************************ 00:04:24.909 START TEST rpc 00:04:24.909 ************************************ 00:04:24.909 07:08:33 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:24.909 * Looking for test storage... 00:04:24.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:24.909 07:08:33 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58771 00:04:24.909 07:08:33 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:24.909 07:08:33 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.909 07:08:33 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58771 00:04:24.909 07:08:33 rpc -- common/autotest_common.sh@829 -- # '[' -z 58771 ']' 00:04:24.909 07:08:33 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.909 07:08:33 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:24.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.909 07:08:33 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.909 07:08:33 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:24.909 07:08:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.909 [2024-07-15 07:08:33.784678] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:04:24.909 [2024-07-15 07:08:33.784772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58771 ] 00:04:25.167 [2024-07-15 07:08:33.920506] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.167 [2024-07-15 07:08:33.979674] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:25.167 [2024-07-15 07:08:33.979741] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58771' to capture a snapshot of events at runtime. 00:04:25.167 [2024-07-15 07:08:33.979751] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:25.167 [2024-07-15 07:08:33.979758] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:25.167 [2024-07-15 07:08:33.979764] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58771 for offline analysis/debug. 
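The target for this rpc suite was launched with '-e bdev', so the bdev tpoint group is enabled and the trace ring is backed by /dev/shm/spdk_tgt_trace.pid58771. A sketch of inspecting it while the target runs, using the stock rpc.py client plus the exact command suggested in the notice above:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock trace_get_info                      # masks and shm path, as rpc_trace_cmd_test checks below
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock trace_enable_tpoint_group nvmf_tcp  # enable another tpoint group at runtime
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 58771                                # snapshot the trace ring, as the notice suggests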
00:04:25.167 [2024-07-15 07:08:33.979787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.167 [2024-07-15 07:08:34.009181] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:25.425 07:08:34 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:25.425 07:08:34 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:25.425 07:08:34 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:25.425 07:08:34 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:25.425 07:08:34 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:25.425 07:08:34 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:25.425 07:08:34 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.425 07:08:34 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.425 07:08:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.425 ************************************ 00:04:25.425 START TEST rpc_integrity 00:04:25.425 ************************************ 00:04:25.425 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:25.425 07:08:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:25.425 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.425 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.425 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.425 07:08:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:25.425 07:08:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:25.425 07:08:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:25.425 07:08:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:25.425 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.425 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.425 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.425 07:08:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:25.425 07:08:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:25.425 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.425 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.425 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.425 07:08:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:25.425 { 00:04:25.425 "name": "Malloc0", 00:04:25.425 "aliases": [ 00:04:25.425 "0517e46b-3eaf-4dd2-a00a-955b3212e922" 00:04:25.425 ], 00:04:25.425 "product_name": "Malloc disk", 00:04:25.425 "block_size": 512, 00:04:25.425 "num_blocks": 16384, 00:04:25.425 "uuid": "0517e46b-3eaf-4dd2-a00a-955b3212e922", 00:04:25.425 "assigned_rate_limits": { 00:04:25.425 "rw_ios_per_sec": 0, 00:04:25.425 "rw_mbytes_per_sec": 0, 00:04:25.425 "r_mbytes_per_sec": 0, 00:04:25.425 "w_mbytes_per_sec": 0 00:04:25.425 }, 00:04:25.426 "claimed": false, 00:04:25.426 "zoned": false, 00:04:25.426 
"supported_io_types": { 00:04:25.426 "read": true, 00:04:25.426 "write": true, 00:04:25.426 "unmap": true, 00:04:25.426 "flush": true, 00:04:25.426 "reset": true, 00:04:25.426 "nvme_admin": false, 00:04:25.426 "nvme_io": false, 00:04:25.426 "nvme_io_md": false, 00:04:25.426 "write_zeroes": true, 00:04:25.426 "zcopy": true, 00:04:25.426 "get_zone_info": false, 00:04:25.426 "zone_management": false, 00:04:25.426 "zone_append": false, 00:04:25.426 "compare": false, 00:04:25.426 "compare_and_write": false, 00:04:25.426 "abort": true, 00:04:25.426 "seek_hole": false, 00:04:25.426 "seek_data": false, 00:04:25.426 "copy": true, 00:04:25.426 "nvme_iov_md": false 00:04:25.426 }, 00:04:25.426 "memory_domains": [ 00:04:25.426 { 00:04:25.426 "dma_device_id": "system", 00:04:25.426 "dma_device_type": 1 00:04:25.426 }, 00:04:25.426 { 00:04:25.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.426 "dma_device_type": 2 00:04:25.426 } 00:04:25.426 ], 00:04:25.426 "driver_specific": {} 00:04:25.426 } 00:04:25.426 ]' 00:04:25.426 07:08:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:25.426 07:08:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:25.426 07:08:34 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:25.426 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.426 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.426 [2024-07-15 07:08:34.287793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:25.426 [2024-07-15 07:08:34.287870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:25.426 [2024-07-15 07:08:34.287889] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1891da0 00:04:25.426 [2024-07-15 07:08:34.287899] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:25.426 [2024-07-15 07:08:34.289366] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:25.426 [2024-07-15 07:08:34.289419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:25.426 Passthru0 00:04:25.426 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.426 07:08:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:25.426 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.426 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.426 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.426 07:08:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:25.426 { 00:04:25.426 "name": "Malloc0", 00:04:25.426 "aliases": [ 00:04:25.426 "0517e46b-3eaf-4dd2-a00a-955b3212e922" 00:04:25.426 ], 00:04:25.426 "product_name": "Malloc disk", 00:04:25.426 "block_size": 512, 00:04:25.426 "num_blocks": 16384, 00:04:25.426 "uuid": "0517e46b-3eaf-4dd2-a00a-955b3212e922", 00:04:25.426 "assigned_rate_limits": { 00:04:25.426 "rw_ios_per_sec": 0, 00:04:25.426 "rw_mbytes_per_sec": 0, 00:04:25.426 "r_mbytes_per_sec": 0, 00:04:25.426 "w_mbytes_per_sec": 0 00:04:25.426 }, 00:04:25.426 "claimed": true, 00:04:25.426 "claim_type": "exclusive_write", 00:04:25.426 "zoned": false, 00:04:25.426 "supported_io_types": { 00:04:25.426 "read": true, 00:04:25.426 "write": true, 00:04:25.426 "unmap": true, 00:04:25.426 "flush": true, 00:04:25.426 "reset": true, 00:04:25.426 "nvme_admin": false, 
00:04:25.426 "nvme_io": false, 00:04:25.426 "nvme_io_md": false, 00:04:25.426 "write_zeroes": true, 00:04:25.426 "zcopy": true, 00:04:25.426 "get_zone_info": false, 00:04:25.426 "zone_management": false, 00:04:25.426 "zone_append": false, 00:04:25.426 "compare": false, 00:04:25.426 "compare_and_write": false, 00:04:25.426 "abort": true, 00:04:25.426 "seek_hole": false, 00:04:25.426 "seek_data": false, 00:04:25.426 "copy": true, 00:04:25.426 "nvme_iov_md": false 00:04:25.426 }, 00:04:25.426 "memory_domains": [ 00:04:25.426 { 00:04:25.426 "dma_device_id": "system", 00:04:25.426 "dma_device_type": 1 00:04:25.426 }, 00:04:25.426 { 00:04:25.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.426 "dma_device_type": 2 00:04:25.426 } 00:04:25.426 ], 00:04:25.426 "driver_specific": {} 00:04:25.426 }, 00:04:25.426 { 00:04:25.426 "name": "Passthru0", 00:04:25.426 "aliases": [ 00:04:25.426 "57e095b8-b4b6-5a2f-bacc-b8eeba8e94a2" 00:04:25.426 ], 00:04:25.426 "product_name": "passthru", 00:04:25.426 "block_size": 512, 00:04:25.426 "num_blocks": 16384, 00:04:25.426 "uuid": "57e095b8-b4b6-5a2f-bacc-b8eeba8e94a2", 00:04:25.426 "assigned_rate_limits": { 00:04:25.426 "rw_ios_per_sec": 0, 00:04:25.426 "rw_mbytes_per_sec": 0, 00:04:25.426 "r_mbytes_per_sec": 0, 00:04:25.426 "w_mbytes_per_sec": 0 00:04:25.426 }, 00:04:25.426 "claimed": false, 00:04:25.426 "zoned": false, 00:04:25.426 "supported_io_types": { 00:04:25.426 "read": true, 00:04:25.426 "write": true, 00:04:25.426 "unmap": true, 00:04:25.426 "flush": true, 00:04:25.426 "reset": true, 00:04:25.426 "nvme_admin": false, 00:04:25.426 "nvme_io": false, 00:04:25.426 "nvme_io_md": false, 00:04:25.426 "write_zeroes": true, 00:04:25.426 "zcopy": true, 00:04:25.426 "get_zone_info": false, 00:04:25.426 "zone_management": false, 00:04:25.426 "zone_append": false, 00:04:25.426 "compare": false, 00:04:25.426 "compare_and_write": false, 00:04:25.426 "abort": true, 00:04:25.426 "seek_hole": false, 00:04:25.426 "seek_data": false, 00:04:25.426 "copy": true, 00:04:25.426 "nvme_iov_md": false 00:04:25.426 }, 00:04:25.426 "memory_domains": [ 00:04:25.426 { 00:04:25.426 "dma_device_id": "system", 00:04:25.426 "dma_device_type": 1 00:04:25.426 }, 00:04:25.426 { 00:04:25.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.426 "dma_device_type": 2 00:04:25.426 } 00:04:25.426 ], 00:04:25.426 "driver_specific": { 00:04:25.426 "passthru": { 00:04:25.426 "name": "Passthru0", 00:04:25.426 "base_bdev_name": "Malloc0" 00:04:25.426 } 00:04:25.426 } 00:04:25.426 } 00:04:25.426 ]' 00:04:25.426 07:08:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:25.426 07:08:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:25.426 07:08:34 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:25.426 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.426 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.685 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.685 07:08:34 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:25.685 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.685 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.685 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.685 07:08:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:25.685 07:08:34 rpc.rpc_integrity -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.685 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.685 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.685 07:08:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:25.685 07:08:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:25.685 07:08:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:25.685 00:04:25.685 real 0m0.315s 00:04:25.685 user 0m0.211s 00:04:25.685 sys 0m0.041s 00:04:25.685 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.685 07:08:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.685 ************************************ 00:04:25.685 END TEST rpc_integrity 00:04:25.685 ************************************ 00:04:25.685 07:08:34 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:25.685 07:08:34 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:25.685 07:08:34 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.685 07:08:34 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.685 07:08:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.685 ************************************ 00:04:25.685 START TEST rpc_plugins 00:04:25.685 ************************************ 00:04:25.685 07:08:34 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:25.685 07:08:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:25.685 07:08:34 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.685 07:08:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:25.685 07:08:34 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.685 07:08:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:25.685 07:08:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:25.685 07:08:34 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.685 07:08:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:25.685 07:08:34 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.685 07:08:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:25.685 { 00:04:25.685 "name": "Malloc1", 00:04:25.685 "aliases": [ 00:04:25.685 "2ac8001c-4003-4cc8-a597-7899c7251c09" 00:04:25.685 ], 00:04:25.685 "product_name": "Malloc disk", 00:04:25.685 "block_size": 4096, 00:04:25.685 "num_blocks": 256, 00:04:25.685 "uuid": "2ac8001c-4003-4cc8-a597-7899c7251c09", 00:04:25.685 "assigned_rate_limits": { 00:04:25.685 "rw_ios_per_sec": 0, 00:04:25.685 "rw_mbytes_per_sec": 0, 00:04:25.685 "r_mbytes_per_sec": 0, 00:04:25.685 "w_mbytes_per_sec": 0 00:04:25.685 }, 00:04:25.685 "claimed": false, 00:04:25.685 "zoned": false, 00:04:25.685 "supported_io_types": { 00:04:25.685 "read": true, 00:04:25.685 "write": true, 00:04:25.685 "unmap": true, 00:04:25.685 "flush": true, 00:04:25.685 "reset": true, 00:04:25.685 "nvme_admin": false, 00:04:25.685 "nvme_io": false, 00:04:25.685 "nvme_io_md": false, 00:04:25.685 "write_zeroes": true, 00:04:25.685 "zcopy": true, 00:04:25.685 "get_zone_info": false, 00:04:25.685 "zone_management": false, 00:04:25.685 "zone_append": false, 00:04:25.685 "compare": false, 00:04:25.685 "compare_and_write": false, 00:04:25.685 "abort": true, 00:04:25.685 "seek_hole": false, 00:04:25.685 "seek_data": false, 00:04:25.685 "copy": true, 00:04:25.685 
"nvme_iov_md": false 00:04:25.685 }, 00:04:25.685 "memory_domains": [ 00:04:25.685 { 00:04:25.685 "dma_device_id": "system", 00:04:25.685 "dma_device_type": 1 00:04:25.685 }, 00:04:25.685 { 00:04:25.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.685 "dma_device_type": 2 00:04:25.685 } 00:04:25.685 ], 00:04:25.685 "driver_specific": {} 00:04:25.685 } 00:04:25.685 ]' 00:04:25.685 07:08:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:25.685 07:08:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:25.685 07:08:34 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:25.685 07:08:34 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.685 07:08:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:25.685 07:08:34 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.685 07:08:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:25.685 07:08:34 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.685 07:08:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:25.685 07:08:34 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.685 07:08:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:25.685 07:08:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:25.944 07:08:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:25.944 00:04:25.944 real 0m0.159s 00:04:25.944 user 0m0.107s 00:04:25.944 sys 0m0.016s 00:04:25.944 07:08:34 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.944 ************************************ 00:04:25.944 END TEST rpc_plugins 00:04:25.944 ************************************ 00:04:25.944 07:08:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:25.944 07:08:34 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:25.944 07:08:34 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:25.944 07:08:34 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.944 07:08:34 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.944 07:08:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.944 ************************************ 00:04:25.944 START TEST rpc_trace_cmd_test 00:04:25.944 ************************************ 00:04:25.944 07:08:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:25.944 07:08:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:25.944 07:08:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:25.944 07:08:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.944 07:08:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:25.944 07:08:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.944 07:08:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:25.944 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58771", 00:04:25.944 "tpoint_group_mask": "0x8", 00:04:25.944 "iscsi_conn": { 00:04:25.944 "mask": "0x2", 00:04:25.944 "tpoint_mask": "0x0" 00:04:25.944 }, 00:04:25.944 "scsi": { 00:04:25.944 "mask": "0x4", 00:04:25.944 "tpoint_mask": "0x0" 00:04:25.944 }, 00:04:25.944 "bdev": { 00:04:25.944 "mask": "0x8", 00:04:25.944 "tpoint_mask": "0xffffffffffffffff" 00:04:25.944 }, 00:04:25.944 "nvmf_rdma": { 00:04:25.944 "mask": "0x10", 00:04:25.944 "tpoint_mask": "0x0" 
00:04:25.944 }, 00:04:25.944 "nvmf_tcp": { 00:04:25.944 "mask": "0x20", 00:04:25.944 "tpoint_mask": "0x0" 00:04:25.944 }, 00:04:25.944 "ftl": { 00:04:25.944 "mask": "0x40", 00:04:25.944 "tpoint_mask": "0x0" 00:04:25.944 }, 00:04:25.944 "blobfs": { 00:04:25.944 "mask": "0x80", 00:04:25.944 "tpoint_mask": "0x0" 00:04:25.944 }, 00:04:25.944 "dsa": { 00:04:25.944 "mask": "0x200", 00:04:25.944 "tpoint_mask": "0x0" 00:04:25.944 }, 00:04:25.944 "thread": { 00:04:25.944 "mask": "0x400", 00:04:25.944 "tpoint_mask": "0x0" 00:04:25.944 }, 00:04:25.944 "nvme_pcie": { 00:04:25.944 "mask": "0x800", 00:04:25.944 "tpoint_mask": "0x0" 00:04:25.944 }, 00:04:25.944 "iaa": { 00:04:25.944 "mask": "0x1000", 00:04:25.944 "tpoint_mask": "0x0" 00:04:25.944 }, 00:04:25.944 "nvme_tcp": { 00:04:25.944 "mask": "0x2000", 00:04:25.944 "tpoint_mask": "0x0" 00:04:25.944 }, 00:04:25.944 "bdev_nvme": { 00:04:25.944 "mask": "0x4000", 00:04:25.944 "tpoint_mask": "0x0" 00:04:25.944 }, 00:04:25.944 "sock": { 00:04:25.944 "mask": "0x8000", 00:04:25.944 "tpoint_mask": "0x0" 00:04:25.944 } 00:04:25.944 }' 00:04:25.944 07:08:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:25.945 07:08:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:25.945 07:08:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:25.945 07:08:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:25.945 07:08:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:26.203 07:08:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:26.203 07:08:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:26.203 07:08:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:26.203 07:08:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:26.203 07:08:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:26.203 00:04:26.203 real 0m0.281s 00:04:26.203 user 0m0.243s 00:04:26.203 sys 0m0.028s 00:04:26.203 07:08:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.203 ************************************ 00:04:26.203 END TEST rpc_trace_cmd_test 00:04:26.203 07:08:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:26.203 ************************************ 00:04:26.203 07:08:35 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:26.203 07:08:35 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:26.203 07:08:35 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:26.203 07:08:35 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:26.204 07:08:35 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.204 07:08:35 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.204 07:08:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.204 ************************************ 00:04:26.204 START TEST rpc_daemon_integrity 00:04:26.204 ************************************ 00:04:26.204 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:26.204 07:08:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:26.204 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.204 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.204 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.204 
07:08:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:26.204 07:08:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:26.204 07:08:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:26.204 07:08:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:26.204 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.204 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.204 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.204 07:08:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:26.204 07:08:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:26.204 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.204 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.204 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.204 07:08:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:26.204 { 00:04:26.204 "name": "Malloc2", 00:04:26.204 "aliases": [ 00:04:26.204 "27b66a67-d8fe-4b22-9ac4-55029faad72f" 00:04:26.204 ], 00:04:26.204 "product_name": "Malloc disk", 00:04:26.204 "block_size": 512, 00:04:26.204 "num_blocks": 16384, 00:04:26.204 "uuid": "27b66a67-d8fe-4b22-9ac4-55029faad72f", 00:04:26.204 "assigned_rate_limits": { 00:04:26.204 "rw_ios_per_sec": 0, 00:04:26.204 "rw_mbytes_per_sec": 0, 00:04:26.204 "r_mbytes_per_sec": 0, 00:04:26.204 "w_mbytes_per_sec": 0 00:04:26.204 }, 00:04:26.204 "claimed": false, 00:04:26.204 "zoned": false, 00:04:26.204 "supported_io_types": { 00:04:26.204 "read": true, 00:04:26.204 "write": true, 00:04:26.204 "unmap": true, 00:04:26.204 "flush": true, 00:04:26.204 "reset": true, 00:04:26.204 "nvme_admin": false, 00:04:26.204 "nvme_io": false, 00:04:26.204 "nvme_io_md": false, 00:04:26.204 "write_zeroes": true, 00:04:26.204 "zcopy": true, 00:04:26.204 "get_zone_info": false, 00:04:26.204 "zone_management": false, 00:04:26.204 "zone_append": false, 00:04:26.204 "compare": false, 00:04:26.204 "compare_and_write": false, 00:04:26.204 "abort": true, 00:04:26.204 "seek_hole": false, 00:04:26.204 "seek_data": false, 00:04:26.204 "copy": true, 00:04:26.204 "nvme_iov_md": false 00:04:26.204 }, 00:04:26.204 "memory_domains": [ 00:04:26.204 { 00:04:26.204 "dma_device_id": "system", 00:04:26.204 "dma_device_type": 1 00:04:26.204 }, 00:04:26.204 { 00:04:26.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.204 "dma_device_type": 2 00:04:26.204 } 00:04:26.204 ], 00:04:26.204 "driver_specific": {} 00:04:26.204 } 00:04:26.204 ]' 00:04:26.462 07:08:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:26.462 07:08:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:26.462 07:08:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:26.462 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.462 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.462 [2024-07-15 07:08:35.208505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:26.462 [2024-07-15 07:08:35.208568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:26.462 [2024-07-15 07:08:35.208588] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18f6be0 00:04:26.462 [2024-07-15 07:08:35.208597] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:26.462 [2024-07-15 07:08:35.209988] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:26.462 [2024-07-15 07:08:35.210024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:26.462 Passthru0 00:04:26.462 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.462 07:08:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:26.463 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.463 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.463 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.463 07:08:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:26.463 { 00:04:26.463 "name": "Malloc2", 00:04:26.463 "aliases": [ 00:04:26.463 "27b66a67-d8fe-4b22-9ac4-55029faad72f" 00:04:26.463 ], 00:04:26.463 "product_name": "Malloc disk", 00:04:26.463 "block_size": 512, 00:04:26.463 "num_blocks": 16384, 00:04:26.463 "uuid": "27b66a67-d8fe-4b22-9ac4-55029faad72f", 00:04:26.463 "assigned_rate_limits": { 00:04:26.463 "rw_ios_per_sec": 0, 00:04:26.463 "rw_mbytes_per_sec": 0, 00:04:26.463 "r_mbytes_per_sec": 0, 00:04:26.463 "w_mbytes_per_sec": 0 00:04:26.463 }, 00:04:26.463 "claimed": true, 00:04:26.463 "claim_type": "exclusive_write", 00:04:26.463 "zoned": false, 00:04:26.463 "supported_io_types": { 00:04:26.463 "read": true, 00:04:26.463 "write": true, 00:04:26.463 "unmap": true, 00:04:26.463 "flush": true, 00:04:26.463 "reset": true, 00:04:26.463 "nvme_admin": false, 00:04:26.463 "nvme_io": false, 00:04:26.463 "nvme_io_md": false, 00:04:26.463 "write_zeroes": true, 00:04:26.463 "zcopy": true, 00:04:26.463 "get_zone_info": false, 00:04:26.463 "zone_management": false, 00:04:26.463 "zone_append": false, 00:04:26.463 "compare": false, 00:04:26.463 "compare_and_write": false, 00:04:26.463 "abort": true, 00:04:26.463 "seek_hole": false, 00:04:26.463 "seek_data": false, 00:04:26.463 "copy": true, 00:04:26.463 "nvme_iov_md": false 00:04:26.463 }, 00:04:26.463 "memory_domains": [ 00:04:26.463 { 00:04:26.463 "dma_device_id": "system", 00:04:26.463 "dma_device_type": 1 00:04:26.463 }, 00:04:26.463 { 00:04:26.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.463 "dma_device_type": 2 00:04:26.463 } 00:04:26.463 ], 00:04:26.463 "driver_specific": {} 00:04:26.463 }, 00:04:26.463 { 00:04:26.463 "name": "Passthru0", 00:04:26.463 "aliases": [ 00:04:26.463 "473dcc4f-af0d-5fcd-80a9-a9e30a28643e" 00:04:26.463 ], 00:04:26.463 "product_name": "passthru", 00:04:26.463 "block_size": 512, 00:04:26.463 "num_blocks": 16384, 00:04:26.463 "uuid": "473dcc4f-af0d-5fcd-80a9-a9e30a28643e", 00:04:26.463 "assigned_rate_limits": { 00:04:26.463 "rw_ios_per_sec": 0, 00:04:26.463 "rw_mbytes_per_sec": 0, 00:04:26.463 "r_mbytes_per_sec": 0, 00:04:26.463 "w_mbytes_per_sec": 0 00:04:26.463 }, 00:04:26.463 "claimed": false, 00:04:26.463 "zoned": false, 00:04:26.463 "supported_io_types": { 00:04:26.463 "read": true, 00:04:26.463 "write": true, 00:04:26.463 "unmap": true, 00:04:26.463 "flush": true, 00:04:26.463 "reset": true, 00:04:26.463 "nvme_admin": false, 00:04:26.463 "nvme_io": false, 00:04:26.463 "nvme_io_md": false, 00:04:26.463 "write_zeroes": true, 00:04:26.463 "zcopy": true, 
00:04:26.463 "get_zone_info": false, 00:04:26.463 "zone_management": false, 00:04:26.463 "zone_append": false, 00:04:26.463 "compare": false, 00:04:26.463 "compare_and_write": false, 00:04:26.463 "abort": true, 00:04:26.463 "seek_hole": false, 00:04:26.463 "seek_data": false, 00:04:26.463 "copy": true, 00:04:26.463 "nvme_iov_md": false 00:04:26.463 }, 00:04:26.463 "memory_domains": [ 00:04:26.463 { 00:04:26.463 "dma_device_id": "system", 00:04:26.463 "dma_device_type": 1 00:04:26.463 }, 00:04:26.463 { 00:04:26.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.463 "dma_device_type": 2 00:04:26.463 } 00:04:26.463 ], 00:04:26.463 "driver_specific": { 00:04:26.463 "passthru": { 00:04:26.463 "name": "Passthru0", 00:04:26.463 "base_bdev_name": "Malloc2" 00:04:26.463 } 00:04:26.463 } 00:04:26.463 } 00:04:26.463 ]' 00:04:26.463 07:08:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:26.463 07:08:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:26.463 07:08:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:26.463 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.463 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.463 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.463 07:08:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:26.463 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.463 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.463 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.463 07:08:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:26.463 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.463 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.463 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.463 07:08:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:26.463 07:08:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:26.463 07:08:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:26.463 00:04:26.463 real 0m0.327s 00:04:26.463 user 0m0.225s 00:04:26.463 sys 0m0.037s 00:04:26.463 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.463 07:08:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.463 ************************************ 00:04:26.463 END TEST rpc_daemon_integrity 00:04:26.463 ************************************ 00:04:26.722 07:08:35 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:26.722 07:08:35 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:26.722 07:08:35 rpc -- rpc/rpc.sh@84 -- # killprocess 58771 00:04:26.722 07:08:35 rpc -- common/autotest_common.sh@948 -- # '[' -z 58771 ']' 00:04:26.722 07:08:35 rpc -- common/autotest_common.sh@952 -- # kill -0 58771 00:04:26.722 07:08:35 rpc -- common/autotest_common.sh@953 -- # uname 00:04:26.722 07:08:35 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:26.722 07:08:35 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58771 00:04:26.722 07:08:35 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:26.722 
07:08:35 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:26.722 killing process with pid 58771 00:04:26.722 07:08:35 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58771' 00:04:26.722 07:08:35 rpc -- common/autotest_common.sh@967 -- # kill 58771 00:04:26.722 07:08:35 rpc -- common/autotest_common.sh@972 -- # wait 58771 00:04:26.981 00:04:26.981 real 0m2.068s 00:04:26.981 user 0m2.841s 00:04:26.981 sys 0m0.515s 00:04:26.981 07:08:35 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.981 07:08:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.981 ************************************ 00:04:26.981 END TEST rpc 00:04:26.981 ************************************ 00:04:26.981 07:08:35 -- common/autotest_common.sh@1142 -- # return 0 00:04:26.981 07:08:35 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:26.981 07:08:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.981 07:08:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.981 07:08:35 -- common/autotest_common.sh@10 -- # set +x 00:04:26.981 ************************************ 00:04:26.981 START TEST skip_rpc 00:04:26.981 ************************************ 00:04:26.981 07:08:35 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:26.981 * Looking for test storage... 00:04:26.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:26.981 07:08:35 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:26.981 07:08:35 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:26.981 07:08:35 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:26.981 07:08:35 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.981 07:08:35 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.981 07:08:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.981 ************************************ 00:04:26.981 START TEST skip_rpc 00:04:26.982 ************************************ 00:04:26.982 07:08:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:26.982 07:08:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58956 00:04:26.982 07:08:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.982 07:08:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:26.982 07:08:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:26.982 [2024-07-15 07:08:35.929753] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
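skip_rpc starts the target with --no-rpc-server, so nothing ever listens on /var/tmp/spdk.sock and every rpc_cmd issued by the test is expected to fail. A quick manual equivalent, assuming the default socket path used throughout this run:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 2 spdk_get_version   # fails: no RPC listener on the socket
    kill %1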
00:04:26.982 [2024-07-15 07:08:35.930399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58956 ] 00:04:27.239 [2024-07-15 07:08:36.067543] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.239 [2024-07-15 07:08:36.131020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.239 [2024-07-15 07:08:36.162277] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58956 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 58956 ']' 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 58956 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58956 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:32.501 killing process with pid 58956 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58956' 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 58956 00:04:32.501 07:08:40 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 58956 00:04:32.501 00:04:32.501 real 0m5.293s 00:04:32.501 user 0m5.023s 00:04:32.501 sys 0m0.180s 00:04:32.501 07:08:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.501 07:08:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:04:32.501 ************************************ 00:04:32.501 END TEST skip_rpc 00:04:32.501 ************************************ 00:04:32.501 07:08:41 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:32.501 07:08:41 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:32.501 07:08:41 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.501 07:08:41 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.501 07:08:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.501 ************************************ 00:04:32.501 START TEST skip_rpc_with_json 00:04:32.501 ************************************ 00:04:32.501 07:08:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:32.501 07:08:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:32.501 07:08:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59037 00:04:32.501 07:08:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.501 07:08:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59037 00:04:32.501 07:08:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:32.501 07:08:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 59037 ']' 00:04:32.501 07:08:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.501 07:08:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:32.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.501 07:08:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.501 07:08:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:32.501 07:08:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.501 [2024-07-15 07:08:41.263834] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
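waitforlisten above blocks until pid 59037 is actually answering RPCs on /var/tmp/spdk.sock (the real helper caps the attempts at max_retries=100). A simplified stand-in for that readiness check, not the actual autotest_common.sh implementation:

    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 spdk_get_version >/dev/null 2>&1; do
        sleep 0.5    # keep polling until the target answers
    done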
00:04:32.501 [2024-07-15 07:08:41.263939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59037 ] 00:04:32.501 [2024-07-15 07:08:41.397837] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.758 [2024-07-15 07:08:41.457468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.758 [2024-07-15 07:08:41.489190] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:33.321 07:08:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:33.321 07:08:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:33.321 07:08:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:33.321 07:08:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.321 07:08:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.321 [2024-07-15 07:08:42.240472] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:33.321 request: 00:04:33.321 { 00:04:33.321 "trtype": "tcp", 00:04:33.321 "method": "nvmf_get_transports", 00:04:33.321 "req_id": 1 00:04:33.321 } 00:04:33.321 Got JSON-RPC error response 00:04:33.321 response: 00:04:33.321 { 00:04:33.321 "code": -19, 00:04:33.321 "message": "No such device" 00:04:33.321 } 00:04:33.321 07:08:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:33.321 07:08:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:33.321 07:08:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.321 07:08:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.321 [2024-07-15 07:08:42.252558] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:33.321 07:08:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.321 07:08:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:33.321 07:08:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.321 07:08:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.580 07:08:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.580 07:08:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:33.580 { 00:04:33.580 "subsystems": [ 00:04:33.580 { 00:04:33.580 "subsystem": "keyring", 00:04:33.580 "config": [] 00:04:33.580 }, 00:04:33.580 { 00:04:33.580 "subsystem": "iobuf", 00:04:33.580 "config": [ 00:04:33.580 { 00:04:33.580 "method": "iobuf_set_options", 00:04:33.580 "params": { 00:04:33.580 "small_pool_count": 8192, 00:04:33.580 "large_pool_count": 1024, 00:04:33.580 "small_bufsize": 8192, 00:04:33.580 "large_bufsize": 135168 00:04:33.580 } 00:04:33.580 } 00:04:33.580 ] 00:04:33.580 }, 00:04:33.580 { 00:04:33.580 "subsystem": "sock", 00:04:33.580 "config": [ 00:04:33.580 { 00:04:33.580 "method": "sock_set_default_impl", 00:04:33.580 "params": { 00:04:33.580 "impl_name": "uring" 00:04:33.580 } 00:04:33.580 }, 00:04:33.580 { 00:04:33.580 "method": "sock_impl_set_options", 
00:04:33.580 "params": { 00:04:33.580 "impl_name": "ssl", 00:04:33.580 "recv_buf_size": 4096, 00:04:33.580 "send_buf_size": 4096, 00:04:33.580 "enable_recv_pipe": true, 00:04:33.580 "enable_quickack": false, 00:04:33.580 "enable_placement_id": 0, 00:04:33.580 "enable_zerocopy_send_server": true, 00:04:33.580 "enable_zerocopy_send_client": false, 00:04:33.580 "zerocopy_threshold": 0, 00:04:33.580 "tls_version": 0, 00:04:33.580 "enable_ktls": false 00:04:33.580 } 00:04:33.580 }, 00:04:33.580 { 00:04:33.580 "method": "sock_impl_set_options", 00:04:33.580 "params": { 00:04:33.580 "impl_name": "posix", 00:04:33.580 "recv_buf_size": 2097152, 00:04:33.580 "send_buf_size": 2097152, 00:04:33.580 "enable_recv_pipe": true, 00:04:33.580 "enable_quickack": false, 00:04:33.580 "enable_placement_id": 0, 00:04:33.580 "enable_zerocopy_send_server": true, 00:04:33.580 "enable_zerocopy_send_client": false, 00:04:33.580 "zerocopy_threshold": 0, 00:04:33.580 "tls_version": 0, 00:04:33.580 "enable_ktls": false 00:04:33.580 } 00:04:33.580 }, 00:04:33.580 { 00:04:33.580 "method": "sock_impl_set_options", 00:04:33.580 "params": { 00:04:33.580 "impl_name": "uring", 00:04:33.580 "recv_buf_size": 2097152, 00:04:33.580 "send_buf_size": 2097152, 00:04:33.580 "enable_recv_pipe": true, 00:04:33.580 "enable_quickack": false, 00:04:33.580 "enable_placement_id": 0, 00:04:33.580 "enable_zerocopy_send_server": false, 00:04:33.580 "enable_zerocopy_send_client": false, 00:04:33.580 "zerocopy_threshold": 0, 00:04:33.580 "tls_version": 0, 00:04:33.580 "enable_ktls": false 00:04:33.580 } 00:04:33.580 } 00:04:33.580 ] 00:04:33.580 }, 00:04:33.580 { 00:04:33.580 "subsystem": "vmd", 00:04:33.580 "config": [] 00:04:33.580 }, 00:04:33.580 { 00:04:33.580 "subsystem": "accel", 00:04:33.580 "config": [ 00:04:33.580 { 00:04:33.580 "method": "accel_set_options", 00:04:33.580 "params": { 00:04:33.580 "small_cache_size": 128, 00:04:33.580 "large_cache_size": 16, 00:04:33.580 "task_count": 2048, 00:04:33.580 "sequence_count": 2048, 00:04:33.580 "buf_count": 2048 00:04:33.580 } 00:04:33.580 } 00:04:33.580 ] 00:04:33.580 }, 00:04:33.580 { 00:04:33.580 "subsystem": "bdev", 00:04:33.580 "config": [ 00:04:33.580 { 00:04:33.580 "method": "bdev_set_options", 00:04:33.580 "params": { 00:04:33.580 "bdev_io_pool_size": 65535, 00:04:33.580 "bdev_io_cache_size": 256, 00:04:33.580 "bdev_auto_examine": true, 00:04:33.580 "iobuf_small_cache_size": 128, 00:04:33.580 "iobuf_large_cache_size": 16 00:04:33.580 } 00:04:33.580 }, 00:04:33.580 { 00:04:33.580 "method": "bdev_raid_set_options", 00:04:33.580 "params": { 00:04:33.580 "process_window_size_kb": 1024 00:04:33.580 } 00:04:33.580 }, 00:04:33.580 { 00:04:33.580 "method": "bdev_iscsi_set_options", 00:04:33.580 "params": { 00:04:33.580 "timeout_sec": 30 00:04:33.580 } 00:04:33.580 }, 00:04:33.580 { 00:04:33.580 "method": "bdev_nvme_set_options", 00:04:33.580 "params": { 00:04:33.580 "action_on_timeout": "none", 00:04:33.580 "timeout_us": 0, 00:04:33.580 "timeout_admin_us": 0, 00:04:33.580 "keep_alive_timeout_ms": 10000, 00:04:33.580 "arbitration_burst": 0, 00:04:33.580 "low_priority_weight": 0, 00:04:33.580 "medium_priority_weight": 0, 00:04:33.580 "high_priority_weight": 0, 00:04:33.580 "nvme_adminq_poll_period_us": 10000, 00:04:33.580 "nvme_ioq_poll_period_us": 0, 00:04:33.580 "io_queue_requests": 0, 00:04:33.580 "delay_cmd_submit": true, 00:04:33.580 "transport_retry_count": 4, 00:04:33.580 "bdev_retry_count": 3, 00:04:33.580 "transport_ack_timeout": 0, 00:04:33.580 "ctrlr_loss_timeout_sec": 0, 00:04:33.580 
"reconnect_delay_sec": 0, 00:04:33.580 "fast_io_fail_timeout_sec": 0, 00:04:33.580 "disable_auto_failback": false, 00:04:33.580 "generate_uuids": false, 00:04:33.580 "transport_tos": 0, 00:04:33.581 "nvme_error_stat": false, 00:04:33.581 "rdma_srq_size": 0, 00:04:33.581 "io_path_stat": false, 00:04:33.581 "allow_accel_sequence": false, 00:04:33.581 "rdma_max_cq_size": 0, 00:04:33.581 "rdma_cm_event_timeout_ms": 0, 00:04:33.581 "dhchap_digests": [ 00:04:33.581 "sha256", 00:04:33.581 "sha384", 00:04:33.581 "sha512" 00:04:33.581 ], 00:04:33.581 "dhchap_dhgroups": [ 00:04:33.581 "null", 00:04:33.581 "ffdhe2048", 00:04:33.581 "ffdhe3072", 00:04:33.581 "ffdhe4096", 00:04:33.581 "ffdhe6144", 00:04:33.581 "ffdhe8192" 00:04:33.581 ] 00:04:33.581 } 00:04:33.581 }, 00:04:33.581 { 00:04:33.581 "method": "bdev_nvme_set_hotplug", 00:04:33.581 "params": { 00:04:33.581 "period_us": 100000, 00:04:33.581 "enable": false 00:04:33.581 } 00:04:33.581 }, 00:04:33.581 { 00:04:33.581 "method": "bdev_wait_for_examine" 00:04:33.581 } 00:04:33.581 ] 00:04:33.581 }, 00:04:33.581 { 00:04:33.581 "subsystem": "scsi", 00:04:33.581 "config": null 00:04:33.581 }, 00:04:33.581 { 00:04:33.581 "subsystem": "scheduler", 00:04:33.581 "config": [ 00:04:33.581 { 00:04:33.581 "method": "framework_set_scheduler", 00:04:33.581 "params": { 00:04:33.581 "name": "static" 00:04:33.581 } 00:04:33.581 } 00:04:33.581 ] 00:04:33.581 }, 00:04:33.581 { 00:04:33.581 "subsystem": "vhost_scsi", 00:04:33.581 "config": [] 00:04:33.581 }, 00:04:33.581 { 00:04:33.581 "subsystem": "vhost_blk", 00:04:33.581 "config": [] 00:04:33.581 }, 00:04:33.581 { 00:04:33.581 "subsystem": "ublk", 00:04:33.581 "config": [] 00:04:33.581 }, 00:04:33.581 { 00:04:33.581 "subsystem": "nbd", 00:04:33.581 "config": [] 00:04:33.581 }, 00:04:33.581 { 00:04:33.581 "subsystem": "nvmf", 00:04:33.581 "config": [ 00:04:33.581 { 00:04:33.581 "method": "nvmf_set_config", 00:04:33.581 "params": { 00:04:33.581 "discovery_filter": "match_any", 00:04:33.581 "admin_cmd_passthru": { 00:04:33.581 "identify_ctrlr": false 00:04:33.581 } 00:04:33.581 } 00:04:33.581 }, 00:04:33.581 { 00:04:33.581 "method": "nvmf_set_max_subsystems", 00:04:33.581 "params": { 00:04:33.581 "max_subsystems": 1024 00:04:33.581 } 00:04:33.581 }, 00:04:33.581 { 00:04:33.581 "method": "nvmf_set_crdt", 00:04:33.581 "params": { 00:04:33.581 "crdt1": 0, 00:04:33.581 "crdt2": 0, 00:04:33.581 "crdt3": 0 00:04:33.581 } 00:04:33.581 }, 00:04:33.581 { 00:04:33.581 "method": "nvmf_create_transport", 00:04:33.581 "params": { 00:04:33.581 "trtype": "TCP", 00:04:33.581 "max_queue_depth": 128, 00:04:33.581 "max_io_qpairs_per_ctrlr": 127, 00:04:33.581 "in_capsule_data_size": 4096, 00:04:33.581 "max_io_size": 131072, 00:04:33.581 "io_unit_size": 131072, 00:04:33.581 "max_aq_depth": 128, 00:04:33.581 "num_shared_buffers": 511, 00:04:33.581 "buf_cache_size": 4294967295, 00:04:33.581 "dif_insert_or_strip": false, 00:04:33.581 "zcopy": false, 00:04:33.581 "c2h_success": true, 00:04:33.581 "sock_priority": 0, 00:04:33.581 "abort_timeout_sec": 1, 00:04:33.581 "ack_timeout": 0, 00:04:33.581 "data_wr_pool_size": 0 00:04:33.581 } 00:04:33.581 } 00:04:33.581 ] 00:04:33.581 }, 00:04:33.581 { 00:04:33.581 "subsystem": "iscsi", 00:04:33.581 "config": [ 00:04:33.581 { 00:04:33.581 "method": "iscsi_set_options", 00:04:33.581 "params": { 00:04:33.581 "node_base": "iqn.2016-06.io.spdk", 00:04:33.581 "max_sessions": 128, 00:04:33.581 "max_connections_per_session": 2, 00:04:33.581 "max_queue_depth": 64, 00:04:33.581 "default_time2wait": 2, 
00:04:33.581 "default_time2retain": 20, 00:04:33.581 "first_burst_length": 8192, 00:04:33.581 "immediate_data": true, 00:04:33.581 "allow_duplicated_isid": false, 00:04:33.581 "error_recovery_level": 0, 00:04:33.581 "nop_timeout": 60, 00:04:33.581 "nop_in_interval": 30, 00:04:33.581 "disable_chap": false, 00:04:33.581 "require_chap": false, 00:04:33.581 "mutual_chap": false, 00:04:33.581 "chap_group": 0, 00:04:33.581 "max_large_datain_per_connection": 64, 00:04:33.581 "max_r2t_per_connection": 4, 00:04:33.581 "pdu_pool_size": 36864, 00:04:33.581 "immediate_data_pool_size": 16384, 00:04:33.581 "data_out_pool_size": 2048 00:04:33.581 } 00:04:33.581 } 00:04:33.581 ] 00:04:33.581 } 00:04:33.581 ] 00:04:33.581 } 00:04:33.581 07:08:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:33.581 07:08:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59037 00:04:33.581 07:08:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59037 ']' 00:04:33.581 07:08:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59037 00:04:33.581 07:08:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:33.581 07:08:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:33.581 07:08:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59037 00:04:33.581 07:08:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:33.581 07:08:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:33.581 killing process with pid 59037 00:04:33.581 07:08:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59037' 00:04:33.581 07:08:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59037 00:04:33.581 07:08:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59037 00:04:33.837 07:08:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59064 00:04:33.837 07:08:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:33.837 07:08:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:39.170 07:08:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59064 00:04:39.170 07:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59064 ']' 00:04:39.170 07:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59064 00:04:39.170 07:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:39.170 07:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:39.170 07:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59064 00:04:39.170 07:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:39.170 07:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:39.170 killing process with pid 59064 00:04:39.170 07:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59064' 00:04:39.170 07:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59064 
00:04:39.170 07:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59064 00:04:39.170 07:08:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:39.170 07:08:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:39.170 00:04:39.170 real 0m6.805s 00:04:39.170 user 0m6.727s 00:04:39.170 sys 0m0.457s 00:04:39.170 07:08:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.170 ************************************ 00:04:39.170 END TEST skip_rpc_with_json 00:04:39.170 ************************************ 00:04:39.170 07:08:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:39.170 07:08:48 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:39.170 07:08:48 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:39.170 07:08:48 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.170 07:08:48 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.170 07:08:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.170 ************************************ 00:04:39.170 START TEST skip_rpc_with_delay 00:04:39.170 ************************************ 00:04:39.170 07:08:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:39.170 07:08:48 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:39.170 07:08:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:39.170 07:08:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:39.170 07:08:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.170 07:08:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.170 07:08:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.170 07:08:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.170 07:08:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.170 07:08:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.170 07:08:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.170 07:08:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:39.170 07:08:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:39.170 [2024-07-15 07:08:48.119699] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
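The *ERROR* above (and the unclaim_cpu_cores complaint that follows) is the expected outcome here: skip_rpc_with_delay asserts that spdk_tgt refuses --wait-for-rpc when --no-rpc-server disables the RPC server, which is why the NOT wrapper and the es=1 bookkeeping below treat a non-zero exit as success. Stripped of the test framework, the check amounts to something like:

# Stand-alone version of the negative check (binary path as seen in the trace above).
if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "spdk_tgt unexpectedly accepted --wait-for-rpc with no RPC server" >&2
    exit 1
fi
echo "got the expected startup failure"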
00:04:39.170 [2024-07-15 07:08:48.119802] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:39.428 07:08:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:39.428 07:08:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:39.428 07:08:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:39.428 07:08:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:39.428 00:04:39.428 real 0m0.078s 00:04:39.428 user 0m0.052s 00:04:39.428 sys 0m0.025s 00:04:39.428 07:08:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.428 ************************************ 00:04:39.428 END TEST skip_rpc_with_delay 00:04:39.428 ************************************ 00:04:39.428 07:08:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:39.428 07:08:48 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:39.428 07:08:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:39.428 07:08:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:39.428 07:08:48 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:39.428 07:08:48 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.428 07:08:48 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.428 07:08:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.428 ************************************ 00:04:39.428 START TEST exit_on_failed_rpc_init 00:04:39.428 ************************************ 00:04:39.428 07:08:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:39.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.428 07:08:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59174 00:04:39.428 07:08:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59174 00:04:39.428 07:08:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:39.428 07:08:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59174 ']' 00:04:39.428 07:08:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.428 07:08:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:39.428 07:08:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.428 07:08:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:39.428 07:08:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:39.428 [2024-07-15 07:08:48.255807] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:04:39.428 [2024-07-15 07:08:48.255900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59174 ] 00:04:39.685 [2024-07-15 07:08:48.393668] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.685 [2024-07-15 07:08:48.466219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.685 [2024-07-15 07:08:48.502187] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:40.620 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:40.620 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:40.620 07:08:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.620 07:08:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.620 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:40.620 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.620 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.620 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:40.620 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.620 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:40.620 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.620 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:40.620 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.620 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:40.620 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.620 [2024-07-15 07:08:49.350415] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:04:40.620 [2024-07-15 07:08:49.350546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59192 ] 00:04:40.620 [2024-07-15 07:08:49.496222] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.620 [2024-07-15 07:08:49.568055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.620 [2024-07-15 07:08:49.568171] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:40.620 [2024-07-15 07:08:49.568189] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:40.620 [2024-07-15 07:08:49.568199] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:40.877 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:40.877 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:40.877 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:40.877 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:40.877 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:40.877 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:40.877 07:08:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:40.877 07:08:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59174 00:04:40.878 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59174 ']' 00:04:40.878 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59174 00:04:40.878 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:40.878 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:40.878 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59174 00:04:40.878 killing process with pid 59174 00:04:40.878 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:40.878 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:40.878 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59174' 00:04:40.878 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59174 00:04:40.878 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59174 00:04:41.135 00:04:41.135 real 0m1.768s 00:04:41.135 user 0m2.185s 00:04:41.135 sys 0m0.332s 00:04:41.135 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.135 07:08:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.135 ************************************ 00:04:41.135 END TEST exit_on_failed_rpc_init 00:04:41.135 ************************************ 00:04:41.135 07:08:49 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:41.135 07:08:49 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:41.135 ************************************ 00:04:41.135 END TEST skip_rpc 00:04:41.135 ************************************ 00:04:41.135 00:04:41.135 real 0m14.241s 00:04:41.135 user 0m14.100s 00:04:41.135 sys 0m1.162s 00:04:41.135 07:08:50 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.135 07:08:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.135 07:08:50 -- common/autotest_common.sh@1142 -- # return 0 00:04:41.135 07:08:50 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:41.135 07:08:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.135 
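The exit_on_failed_rpc_init failure above is deliberate: both spdk_tgt instances defaulted to /var/tmp/spdk.sock, so the second one (core mask 0x2) could not bind its RPC socket and the app stopped with a non-zero code. Outside of this negative test, two targets can coexist by giving each its own socket with -r, the same flag the json_config tests below use. A small illustrative sketch, with made-up socket names:

# Each instance gets a private RPC socket, so there is no collision.
build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
sleep 1
scripts/rpc.py -s /var/tmp/spdk_a.sock rpc_get_methods > /dev/null   # each answers on its own socket
scripts/rpc.py -s /var/tmp/spdk_b.sock rpc_get_methods > /dev/null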
07:08:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.135 07:08:50 -- common/autotest_common.sh@10 -- # set +x 00:04:41.135 ************************************ 00:04:41.135 START TEST rpc_client 00:04:41.135 ************************************ 00:04:41.135 07:08:50 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:41.394 * Looking for test storage... 00:04:41.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:41.394 07:08:50 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:41.394 OK 00:04:41.394 07:08:50 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:41.394 00:04:41.394 real 0m0.104s 00:04:41.394 user 0m0.050s 00:04:41.394 sys 0m0.060s 00:04:41.394 ************************************ 00:04:41.394 END TEST rpc_client 00:04:41.394 ************************************ 00:04:41.394 07:08:50 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.394 07:08:50 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:41.394 07:08:50 -- common/autotest_common.sh@1142 -- # return 0 00:04:41.394 07:08:50 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:41.394 07:08:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.394 07:08:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.394 07:08:50 -- common/autotest_common.sh@10 -- # set +x 00:04:41.394 ************************************ 00:04:41.394 START TEST json_config 00:04:41.394 ************************************ 00:04:41.394 07:08:50 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:41.394 07:08:50 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.394 07:08:50 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:41.394 07:08:50 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.394 07:08:50 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.394 07:08:50 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.394 07:08:50 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.394 07:08:50 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.394 07:08:50 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.394 07:08:50 json_config -- paths/export.sh@5 -- # export PATH 00:04:41.394 07:08:50 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@47 -- # : 0 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:41.394 07:08:50 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:41.394 07:08:50 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:41.394 07:08:50 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:41.394 07:08:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:41.394 07:08:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:41.394 07:08:50 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:41.394 INFO: JSON configuration test init 00:04:41.394 07:08:50 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:41.394 07:08:50 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:41.394 07:08:50 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:41.394 07:08:50 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:41.394 07:08:50 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:41.394 07:08:50 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:41.394 07:08:50 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:41.394 07:08:50 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:41.394 07:08:50 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:41.394 07:08:50 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:41.394 07:08:50 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:41.394 07:08:50 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:41.394 07:08:50 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:41.394 07:08:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:41.394 07:08:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.394 07:08:50 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:41.394 07:08:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:41.394 07:08:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.394 07:08:50 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:41.394 07:08:50 json_config -- json_config/common.sh@9 -- # local app=target 00:04:41.394 07:08:50 json_config -- json_config/common.sh@10 -- # shift 00:04:41.394 07:08:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:41.394 07:08:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:41.394 07:08:50 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:41.394 07:08:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.394 07:08:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.394 07:08:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59310 00:04:41.394 07:08:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:41.394 Waiting for target to run... 
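The launch traced just below starts the target with --wait-for-rpc on a dedicated socket (-r /var/tmp/spdk_tgt.sock), so subsystem initialization is deferred until the test pushes a configuration over RPC. Roughly, the handshake looks like the sketch here; framework_start_init is the generic way to release such a target, while this test instead feeds it a full config with load_config, as the following lines show:

# Sketch of the deferred-init start-up used by the json_config tests.
build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
# While waiting, the target only serves RPCs. Either kick off default init:
#   scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init
# or, as here, push a generated configuration in one shot:
scripts/gen_nvme.sh --json-with-subsystems | scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config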
00:04:41.394 07:08:50 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:41.394 07:08:50 json_config -- json_config/common.sh@25 -- # waitforlisten 59310 /var/tmp/spdk_tgt.sock 00:04:41.394 07:08:50 json_config -- common/autotest_common.sh@829 -- # '[' -z 59310 ']' 00:04:41.394 07:08:50 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:41.394 07:08:50 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.394 07:08:50 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:41.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:41.394 07:08:50 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.394 07:08:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.652 [2024-07-15 07:08:50.363362] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:04:41.652 [2024-07-15 07:08:50.363458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59310 ] 00:04:41.912 [2024-07-15 07:08:50.675741] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.912 [2024-07-15 07:08:50.735150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.481 00:04:42.481 07:08:51 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:42.481 07:08:51 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:42.481 07:08:51 json_config -- json_config/common.sh@26 -- # echo '' 00:04:42.481 07:08:51 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:42.481 07:08:51 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:42.481 07:08:51 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:42.481 07:08:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.481 07:08:51 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:42.481 07:08:51 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:42.481 07:08:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:42.481 07:08:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.740 07:08:51 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:42.740 07:08:51 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:42.740 07:08:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:42.998 [2024-07-15 07:08:51.735568] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:42.998 07:08:51 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:42.998 07:08:51 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:42.998 07:08:51 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:42.998 07:08:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.998 07:08:51 
json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:42.998 07:08:51 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:42.998 07:08:51 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:42.998 07:08:51 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:42.998 07:08:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:42.998 07:08:51 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:43.255 07:08:52 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:43.255 07:08:52 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:43.255 07:08:52 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:43.255 07:08:52 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:43.255 07:08:52 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:43.255 07:08:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.514 07:08:52 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:43.514 07:08:52 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:43.514 07:08:52 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:43.514 07:08:52 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:43.514 07:08:52 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:43.514 07:08:52 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:43.514 07:08:52 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:43.514 07:08:52 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:43.514 07:08:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.514 07:08:52 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:43.514 07:08:52 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:43.514 07:08:52 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:43.514 07:08:52 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:43.514 07:08:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:43.773 MallocForNvmf0 00:04:43.773 07:08:52 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:43.773 07:08:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:44.032 MallocForNvmf1 00:04:44.032 07:08:52 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:44.032 07:08:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:44.290 [2024-07-15 07:08:53.069572] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:44.290 07:08:53 json_config -- json_config/json_config.sh@246 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:44.290 07:08:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:44.547 07:08:53 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:44.547 07:08:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:44.806 07:08:53 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:44.806 07:08:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:45.063 07:08:53 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:45.063 07:08:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:45.320 [2024-07-15 07:08:54.134364] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:45.320 07:08:54 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:45.320 07:08:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.320 07:08:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.320 07:08:54 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:45.320 07:08:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.320 07:08:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.320 07:08:54 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:45.320 07:08:54 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:45.320 07:08:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:45.577 MallocBdevForConfigChangeCheck 00:04:45.577 07:08:54 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:45.577 07:08:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.577 07:08:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.836 07:08:54 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:45.836 07:08:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:46.094 INFO: shutting down applications... 00:04:46.094 07:08:54 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
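For readability, the tgt_rpc calls traced above (all issued through rpc.py against -s /var/tmp/spdk_tgt.sock) boil down to the sequence below, which builds two malloc bdevs, a TCP transport, and one NVMe-oF subsystem listening on 127.0.0.1:4420, then snapshots the result:

RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512  --name MallocForNvmf0               # 8 MB bdev, 512-byte blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1               # 4 MB bdev, 1024-byte blocks
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck   # deleted later to force a config change
$RPC save_config > spdk_tgt_config.json                               # snapshot reused for the diffs below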
00:04:46.094 07:08:54 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:46.094 07:08:54 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:46.094 07:08:54 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:46.094 07:08:54 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:46.352 Calling clear_iscsi_subsystem 00:04:46.352 Calling clear_nvmf_subsystem 00:04:46.352 Calling clear_nbd_subsystem 00:04:46.352 Calling clear_ublk_subsystem 00:04:46.352 Calling clear_vhost_blk_subsystem 00:04:46.352 Calling clear_vhost_scsi_subsystem 00:04:46.352 Calling clear_bdev_subsystem 00:04:46.353 07:08:55 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:46.353 07:08:55 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:46.353 07:08:55 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:46.353 07:08:55 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:46.353 07:08:55 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:46.353 07:08:55 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:46.917 07:08:55 json_config -- json_config/json_config.sh@345 -- # break 00:04:46.917 07:08:55 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:46.917 07:08:55 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:46.917 07:08:55 json_config -- json_config/common.sh@31 -- # local app=target 00:04:46.917 07:08:55 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:46.917 07:08:55 json_config -- json_config/common.sh@35 -- # [[ -n 59310 ]] 00:04:46.917 07:08:55 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59310 00:04:46.917 07:08:55 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:46.917 07:08:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.917 07:08:55 json_config -- json_config/common.sh@41 -- # kill -0 59310 00:04:46.917 07:08:55 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.482 07:08:56 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:47.482 07:08:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.482 07:08:56 json_config -- json_config/common.sh@41 -- # kill -0 59310 00:04:47.482 07:08:56 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:47.482 07:08:56 json_config -- json_config/common.sh@43 -- # break 00:04:47.482 07:08:56 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:47.482 SPDK target shutdown done 00:04:47.482 INFO: relaunching applications... 00:04:47.482 07:08:56 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:47.482 07:08:56 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
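The shutdown just traced is the standard pattern in these tests: send SIGINT to the target, then poll it with kill -0 for up to 30 half-second intervals before giving up. In isolation it looks roughly like:

# Condensed form of json_config_test_shutdown_app's wait loop.
kill -SIGINT "$app_pid"
for ((i = 0; i < 30; i++)); do
    kill -0 "$app_pid" 2> /dev/null || break   # process gone: clean shutdown
    sleep 0.5
done
echo 'SPDK target shutdown done'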
00:04:47.482 07:08:56 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:47.482 07:08:56 json_config -- json_config/common.sh@9 -- # local app=target 00:04:47.482 07:08:56 json_config -- json_config/common.sh@10 -- # shift 00:04:47.482 07:08:56 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:47.482 07:08:56 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:47.482 07:08:56 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:47.482 07:08:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:47.482 07:08:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:47.482 07:08:56 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59501 00:04:47.482 Waiting for target to run... 00:04:47.482 07:08:56 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:47.482 07:08:56 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:47.482 07:08:56 json_config -- json_config/common.sh@25 -- # waitforlisten 59501 /var/tmp/spdk_tgt.sock 00:04:47.482 07:08:56 json_config -- common/autotest_common.sh@829 -- # '[' -z 59501 ']' 00:04:47.482 07:08:56 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:47.482 07:08:56 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:47.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:47.482 07:08:56 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:47.482 07:08:56 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:47.482 07:08:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.482 [2024-07-15 07:08:56.247644] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:04:47.482 [2024-07-15 07:08:56.247741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59501 ] 00:04:47.740 [2024-07-15 07:08:56.549244] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.740 [2024-07-15 07:08:56.597343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.998 [2024-07-15 07:08:56.724256] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:47.998 [2024-07-15 07:08:56.915609] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.998 [2024-07-15 07:08:56.947675] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:48.562 07:08:57 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.562 00:04:48.562 07:08:57 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:48.562 07:08:57 json_config -- json_config/common.sh@26 -- # echo '' 00:04:48.562 07:08:57 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:48.562 INFO: Checking if target configuration is the same... 
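The check announced above is performed by json_diff.sh, traced next: it dumps the relaunched target's live configuration with save_config, canonicalizes both that dump and spdk_tgt_config.json with config_filter.py -method sort, and diffs the two, so an empty diff means the JSON round-trip preserved the configuration. In outline (temporary file names are illustrative):

# Paraphrase of the comparison done by test/json_config/json_diff.sh.
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/file_sorted.json
diff -u /tmp/live_sorted.json /tmp/file_sorted.json \
    && echo 'INFO: JSON config files are the same'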
00:04:48.562 07:08:57 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:48.562 07:08:57 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:48.562 07:08:57 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:48.562 07:08:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:48.562 + '[' 2 -ne 2 ']' 00:04:48.562 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:48.562 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:48.562 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:48.563 +++ basename /dev/fd/62 00:04:48.563 ++ mktemp /tmp/62.XXX 00:04:48.563 + tmp_file_1=/tmp/62.STj 00:04:48.563 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:48.563 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:48.563 + tmp_file_2=/tmp/spdk_tgt_config.json.74O 00:04:48.563 + ret=0 00:04:48.563 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:48.821 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:48.821 + diff -u /tmp/62.STj /tmp/spdk_tgt_config.json.74O 00:04:48.821 + echo 'INFO: JSON config files are the same' 00:04:48.821 INFO: JSON config files are the same 00:04:48.821 + rm /tmp/62.STj /tmp/spdk_tgt_config.json.74O 00:04:48.821 + exit 0 00:04:48.821 07:08:57 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:48.821 INFO: changing configuration and checking if this can be detected... 00:04:48.821 07:08:57 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:48.821 07:08:57 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:48.821 07:08:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:49.079 07:08:57 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:49.079 07:08:57 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:49.079 07:08:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:49.079 + '[' 2 -ne 2 ']' 00:04:49.079 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:49.079 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:49.079 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:49.079 +++ basename /dev/fd/62 00:04:49.079 ++ mktemp /tmp/62.XXX 00:04:49.079 + tmp_file_1=/tmp/62.iiU 00:04:49.079 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:49.079 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:49.079 + tmp_file_2=/tmp/spdk_tgt_config.json.ZhO 00:04:49.079 + ret=0 00:04:49.079 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:49.645 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:49.645 + diff -u /tmp/62.iiU /tmp/spdk_tgt_config.json.ZhO 00:04:49.645 + ret=1 00:04:49.645 + echo '=== Start of file: /tmp/62.iiU ===' 00:04:49.645 + cat /tmp/62.iiU 00:04:49.645 + echo '=== End of file: /tmp/62.iiU ===' 00:04:49.645 + echo '' 00:04:49.645 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ZhO ===' 00:04:49.645 + cat /tmp/spdk_tgt_config.json.ZhO 00:04:49.645 + echo '=== End of file: /tmp/spdk_tgt_config.json.ZhO ===' 00:04:49.645 + echo '' 00:04:49.645 + rm /tmp/62.iiU /tmp/spdk_tgt_config.json.ZhO 00:04:49.645 + exit 1 00:04:49.645 INFO: configuration change detected. 00:04:49.645 07:08:58 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:49.645 07:08:58 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:49.645 07:08:58 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:49.645 07:08:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:49.645 07:08:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.645 07:08:58 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:49.645 07:08:58 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:49.645 07:08:58 json_config -- json_config/json_config.sh@317 -- # [[ -n 59501 ]] 00:04:49.645 07:08:58 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:49.645 07:08:58 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:49.645 07:08:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:49.645 07:08:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.645 07:08:58 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:49.645 07:08:58 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:49.645 07:08:58 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:49.645 07:08:58 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:49.645 07:08:58 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:49.645 07:08:58 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:49.645 07:08:58 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:49.645 07:08:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.645 07:08:58 json_config -- json_config/json_config.sh@323 -- # killprocess 59501 00:04:49.645 07:08:58 json_config -- common/autotest_common.sh@948 -- # '[' -z 59501 ']' 00:04:49.645 07:08:58 json_config -- common/autotest_common.sh@952 -- # kill -0 59501 00:04:49.645 07:08:58 json_config -- common/autotest_common.sh@953 -- # uname 00:04:49.645 07:08:58 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:49.645 07:08:58 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59501 00:04:49.645 
07:08:58 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:49.645 killing process with pid 59501 00:04:49.645 07:08:58 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:49.645 07:08:58 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59501' 00:04:49.645 07:08:58 json_config -- common/autotest_common.sh@967 -- # kill 59501 00:04:49.645 07:08:58 json_config -- common/autotest_common.sh@972 -- # wait 59501 00:04:49.903 07:08:58 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:49.903 07:08:58 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:49.903 07:08:58 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:49.903 07:08:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.903 07:08:58 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:49.903 INFO: Success 00:04:49.903 07:08:58 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:49.903 00:04:49.903 real 0m8.517s 00:04:49.903 user 0m12.557s 00:04:49.903 sys 0m1.474s 00:04:49.903 07:08:58 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.903 07:08:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.903 ************************************ 00:04:49.903 END TEST json_config 00:04:49.903 ************************************ 00:04:49.903 07:08:58 -- common/autotest_common.sh@1142 -- # return 0 00:04:49.903 07:08:58 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:49.903 07:08:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.903 07:08:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.903 07:08:58 -- common/autotest_common.sh@10 -- # set +x 00:04:49.903 ************************************ 00:04:49.903 START TEST json_config_extra_key 00:04:49.904 ************************************ 00:04:49.904 07:08:58 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:49.904 07:08:58 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:49.904 07:08:58 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:49.904 07:08:58 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:49.904 07:08:58 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:49.904 07:08:58 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.904 07:08:58 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.904 07:08:58 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.904 07:08:58 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:49.904 07:08:58 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:49.904 07:08:58 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:49.904 07:08:58 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:49.904 07:08:58 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:49.904 07:08:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:49.904 07:08:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:49.904 07:08:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:49.904 07:08:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:49.904 07:08:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:49.904 07:08:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:49.904 07:08:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:49.904 07:08:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:49.904 07:08:58 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:49.904 INFO: launching applications... 00:04:49.904 07:08:58 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:49.904 07:08:58 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:49.904 07:08:58 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:49.904 07:08:58 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:49.904 07:08:58 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:49.904 07:08:58 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:49.904 07:08:58 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:49.904 07:08:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.904 07:08:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.904 07:08:58 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59641 00:04:49.904 Waiting for target to run... 00:04:49.904 07:08:58 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
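json_config_extra_key boots the target directly from a static file (test/json_config/extra_key.json in the trace below) instead of configuring it over RPC. Such a file uses the same top-level "subsystems" array seen in the dump at the start of this section; a minimal illustrative skeleton, which is not the actual contents of extra_key.json, would be:

# Hypothetical minimal config file accepted by spdk_tgt --json.
cat > /tmp/minimal_config.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /tmp/minimal_config.json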
00:04:49.904 07:08:58 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59641 /var/tmp/spdk_tgt.sock 00:04:49.904 07:08:58 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:49.904 07:08:58 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 59641 ']' 00:04:49.904 07:08:58 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:49.904 07:08:58 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:49.904 07:08:58 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:49.904 07:08:58 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.904 07:08:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:50.161 [2024-07-15 07:08:58.936107] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:04:50.161 [2024-07-15 07:08:58.936250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59641 ] 00:04:50.462 [2024-07-15 07:08:59.248458] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.462 [2024-07-15 07:08:59.303624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.462 [2024-07-15 07:08:59.325500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:51.029 07:08:59 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.029 00:04:51.029 07:08:59 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:51.029 07:08:59 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:51.029 INFO: shutting down applications... 00:04:51.029 07:08:59 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
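Editor's note: above, the target is started with the extra_key.json configuration and the test blocks until the RPC socket answers before continuing. A rough sketch of that start-and-wait step follows, using the exact binary, socket and config paths from this run; the polling loop is a simplified stand-in for the waitforlisten helper in common/autotest_common.sh, and the retry count and sleep interval here are illustrative.

    spdk=/home/vagrant/spdk_repo/spdk

    # Launch spdk_tgt with the JSON config and remember its pid
    "$spdk/build/bin/spdk_tgt" -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json "$spdk/test/json_config/extra_key.json" &
    target_pid=$!

    # Simplified stand-in for waitforlisten: poll until the RPC socket responds
    for _ in $(seq 1 100); do
        "$spdk/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done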
00:04:51.029 07:08:59 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:51.029 07:08:59 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:51.029 07:08:59 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:51.029 07:08:59 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59641 ]] 00:04:51.029 07:08:59 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59641 00:04:51.029 07:08:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:51.029 07:08:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.029 07:08:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59641 00:04:51.029 07:08:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.596 07:09:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.596 07:09:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.596 07:09:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59641 00:04:51.596 07:09:00 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:51.596 07:09:00 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:51.596 SPDK target shutdown done 00:04:51.596 Success 00:04:51.596 07:09:00 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:51.596 07:09:00 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:51.596 07:09:00 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:51.596 00:04:51.596 real 0m1.655s 00:04:51.596 user 0m1.547s 00:04:51.596 sys 0m0.331s 00:04:51.596 07:09:00 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.596 ************************************ 00:04:51.596 END TEST json_config_extra_key 00:04:51.596 ************************************ 00:04:51.596 07:09:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:51.596 07:09:00 -- common/autotest_common.sh@1142 -- # return 0 00:04:51.596 07:09:00 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:51.596 07:09:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.596 07:09:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.596 07:09:00 -- common/autotest_common.sh@10 -- # set +x 00:04:51.596 ************************************ 00:04:51.596 START TEST alias_rpc 00:04:51.596 ************************************ 00:04:51.596 07:09:00 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:51.596 * Looking for test storage... 
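Editor's note: the shutdown half of json_config_extra_key, traced above, sends SIGINT to the target and then polls kill -0 up to 30 times with a 0.5 s sleep until the pid disappears. A compact sketch of that loop:

    # Ask the target to exit, then wait for the pid to go away (the loop traced above)
    pid=59641            # pid of the target in this run
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # kill -0 only checks that the pid still exists
        sleep 0.5
    done
    echo 'SPDK target shutdown done'

The 30 x 0.5 s budget gives the target roughly 15 seconds to exit cleanly before the test gives up waiting.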
00:04:51.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:51.855 07:09:00 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:51.855 07:09:00 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59711 00:04:51.855 07:09:00 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.855 07:09:00 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59711 00:04:51.855 07:09:00 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 59711 ']' 00:04:51.855 07:09:00 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.855 07:09:00 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.855 07:09:00 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.855 07:09:00 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.855 07:09:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.855 [2024-07-15 07:09:00.616741] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:04:51.855 [2024-07-15 07:09:00.616876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59711 ] 00:04:51.855 [2024-07-15 07:09:00.752524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.113 [2024-07-15 07:09:00.818808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.113 [2024-07-15 07:09:00.852528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:52.680 07:09:01 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.680 07:09:01 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:52.680 07:09:01 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:53.247 07:09:01 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59711 00:04:53.247 07:09:01 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 59711 ']' 00:04:53.247 07:09:01 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 59711 00:04:53.247 07:09:01 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:53.247 07:09:01 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:53.247 07:09:01 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59711 00:04:53.247 07:09:01 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:53.247 07:09:01 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:53.247 killing process with pid 59711 00:04:53.247 07:09:01 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59711' 00:04:53.247 07:09:01 alias_rpc -- common/autotest_common.sh@967 -- # kill 59711 00:04:53.247 07:09:01 alias_rpc -- common/autotest_common.sh@972 -- # wait 59711 00:04:53.506 00:04:53.506 real 0m1.753s 00:04:53.506 user 0m2.139s 00:04:53.506 sys 0m0.347s 00:04:53.506 07:09:02 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.506 07:09:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.506 
************************************ 00:04:53.506 END TEST alias_rpc 00:04:53.506 ************************************ 00:04:53.506 07:09:02 -- common/autotest_common.sh@1142 -- # return 0 00:04:53.506 07:09:02 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:53.506 07:09:02 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:53.506 07:09:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.506 07:09:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.506 07:09:02 -- common/autotest_common.sh@10 -- # set +x 00:04:53.506 ************************************ 00:04:53.506 START TEST spdkcli_tcp 00:04:53.506 ************************************ 00:04:53.506 07:09:02 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:53.506 * Looking for test storage... 00:04:53.506 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:53.506 07:09:02 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:53.506 07:09:02 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:53.506 07:09:02 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:53.506 07:09:02 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:53.506 07:09:02 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:53.506 07:09:02 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:53.506 07:09:02 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:53.506 07:09:02 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.506 07:09:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.506 07:09:02 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59782 00:04:53.506 07:09:02 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59782 00:04:53.506 07:09:02 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:53.506 07:09:02 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 59782 ']' 00:04:53.506 07:09:02 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.506 07:09:02 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.506 07:09:02 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.506 07:09:02 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.506 07:09:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.506 [2024-07-15 07:09:02.437579] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
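Editor's note: the alias_rpc run that finishes above reduces to three moves: start a plain spdk_tgt, replay a saved configuration through rpc.py load_config -i (as invoked above), and tear the target down. A hedged sketch of that sequence follows; the waitforlisten step is elided, and the conf.json filename is an assumption since the config path is not visible in this excerpt.

    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/build/bin/spdk_tgt" &            # no extra flags, matching the launch above
    tgt_pid=$!
    # ... wait for /var/tmp/spdk.sock to come up, as in the earlier tests ...
    "$spdk/scripts/rpc.py" load_config -i < "$spdk/test/json_config/alias_rpc/conf.json"   # filename assumed
    kill -SIGINT "$tgt_pid"
    wait "$tgt_pid"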
00:04:53.506 [2024-07-15 07:09:02.437678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59782 ] 00:04:53.765 [2024-07-15 07:09:02.577057] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.765 [2024-07-15 07:09:02.650975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.765 [2024-07-15 07:09:02.650987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.765 [2024-07-15 07:09:02.686297] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:54.023 07:09:02 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:54.023 07:09:02 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:54.023 07:09:02 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59791 00:04:54.023 07:09:02 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:54.023 07:09:02 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:54.283 [ 00:04:54.283 "bdev_malloc_delete", 00:04:54.283 "bdev_malloc_create", 00:04:54.283 "bdev_null_resize", 00:04:54.283 "bdev_null_delete", 00:04:54.283 "bdev_null_create", 00:04:54.283 "bdev_nvme_cuse_unregister", 00:04:54.283 "bdev_nvme_cuse_register", 00:04:54.283 "bdev_opal_new_user", 00:04:54.283 "bdev_opal_set_lock_state", 00:04:54.283 "bdev_opal_delete", 00:04:54.283 "bdev_opal_get_info", 00:04:54.283 "bdev_opal_create", 00:04:54.283 "bdev_nvme_opal_revert", 00:04:54.283 "bdev_nvme_opal_init", 00:04:54.283 "bdev_nvme_send_cmd", 00:04:54.283 "bdev_nvme_get_path_iostat", 00:04:54.283 "bdev_nvme_get_mdns_discovery_info", 00:04:54.283 "bdev_nvme_stop_mdns_discovery", 00:04:54.283 "bdev_nvme_start_mdns_discovery", 00:04:54.283 "bdev_nvme_set_multipath_policy", 00:04:54.283 "bdev_nvme_set_preferred_path", 00:04:54.283 "bdev_nvme_get_io_paths", 00:04:54.283 "bdev_nvme_remove_error_injection", 00:04:54.283 "bdev_nvme_add_error_injection", 00:04:54.283 "bdev_nvme_get_discovery_info", 00:04:54.283 "bdev_nvme_stop_discovery", 00:04:54.283 "bdev_nvme_start_discovery", 00:04:54.283 "bdev_nvme_get_controller_health_info", 00:04:54.283 "bdev_nvme_disable_controller", 00:04:54.283 "bdev_nvme_enable_controller", 00:04:54.283 "bdev_nvme_reset_controller", 00:04:54.283 "bdev_nvme_get_transport_statistics", 00:04:54.283 "bdev_nvme_apply_firmware", 00:04:54.283 "bdev_nvme_detach_controller", 00:04:54.283 "bdev_nvme_get_controllers", 00:04:54.283 "bdev_nvme_attach_controller", 00:04:54.283 "bdev_nvme_set_hotplug", 00:04:54.283 "bdev_nvme_set_options", 00:04:54.283 "bdev_passthru_delete", 00:04:54.283 "bdev_passthru_create", 00:04:54.283 "bdev_lvol_set_parent_bdev", 00:04:54.283 "bdev_lvol_set_parent", 00:04:54.283 "bdev_lvol_check_shallow_copy", 00:04:54.283 "bdev_lvol_start_shallow_copy", 00:04:54.283 "bdev_lvol_grow_lvstore", 00:04:54.283 "bdev_lvol_get_lvols", 00:04:54.283 "bdev_lvol_get_lvstores", 00:04:54.283 "bdev_lvol_delete", 00:04:54.283 "bdev_lvol_set_read_only", 00:04:54.283 "bdev_lvol_resize", 00:04:54.283 "bdev_lvol_decouple_parent", 00:04:54.283 "bdev_lvol_inflate", 00:04:54.283 "bdev_lvol_rename", 00:04:54.283 "bdev_lvol_clone_bdev", 00:04:54.283 "bdev_lvol_clone", 00:04:54.283 "bdev_lvol_snapshot", 00:04:54.283 "bdev_lvol_create", 
00:04:54.283 "bdev_lvol_delete_lvstore", 00:04:54.283 "bdev_lvol_rename_lvstore", 00:04:54.283 "bdev_lvol_create_lvstore", 00:04:54.283 "bdev_raid_set_options", 00:04:54.283 "bdev_raid_remove_base_bdev", 00:04:54.283 "bdev_raid_add_base_bdev", 00:04:54.283 "bdev_raid_delete", 00:04:54.283 "bdev_raid_create", 00:04:54.283 "bdev_raid_get_bdevs", 00:04:54.283 "bdev_error_inject_error", 00:04:54.283 "bdev_error_delete", 00:04:54.283 "bdev_error_create", 00:04:54.283 "bdev_split_delete", 00:04:54.283 "bdev_split_create", 00:04:54.283 "bdev_delay_delete", 00:04:54.283 "bdev_delay_create", 00:04:54.283 "bdev_delay_update_latency", 00:04:54.283 "bdev_zone_block_delete", 00:04:54.283 "bdev_zone_block_create", 00:04:54.283 "blobfs_create", 00:04:54.283 "blobfs_detect", 00:04:54.283 "blobfs_set_cache_size", 00:04:54.283 "bdev_aio_delete", 00:04:54.283 "bdev_aio_rescan", 00:04:54.283 "bdev_aio_create", 00:04:54.283 "bdev_ftl_set_property", 00:04:54.283 "bdev_ftl_get_properties", 00:04:54.283 "bdev_ftl_get_stats", 00:04:54.283 "bdev_ftl_unmap", 00:04:54.283 "bdev_ftl_unload", 00:04:54.283 "bdev_ftl_delete", 00:04:54.283 "bdev_ftl_load", 00:04:54.283 "bdev_ftl_create", 00:04:54.283 "bdev_virtio_attach_controller", 00:04:54.283 "bdev_virtio_scsi_get_devices", 00:04:54.283 "bdev_virtio_detach_controller", 00:04:54.283 "bdev_virtio_blk_set_hotplug", 00:04:54.283 "bdev_iscsi_delete", 00:04:54.283 "bdev_iscsi_create", 00:04:54.283 "bdev_iscsi_set_options", 00:04:54.283 "bdev_uring_delete", 00:04:54.283 "bdev_uring_rescan", 00:04:54.283 "bdev_uring_create", 00:04:54.283 "accel_error_inject_error", 00:04:54.283 "ioat_scan_accel_module", 00:04:54.283 "dsa_scan_accel_module", 00:04:54.283 "iaa_scan_accel_module", 00:04:54.283 "keyring_file_remove_key", 00:04:54.283 "keyring_file_add_key", 00:04:54.283 "keyring_linux_set_options", 00:04:54.283 "iscsi_get_histogram", 00:04:54.283 "iscsi_enable_histogram", 00:04:54.283 "iscsi_set_options", 00:04:54.283 "iscsi_get_auth_groups", 00:04:54.283 "iscsi_auth_group_remove_secret", 00:04:54.283 "iscsi_auth_group_add_secret", 00:04:54.283 "iscsi_delete_auth_group", 00:04:54.283 "iscsi_create_auth_group", 00:04:54.283 "iscsi_set_discovery_auth", 00:04:54.283 "iscsi_get_options", 00:04:54.283 "iscsi_target_node_request_logout", 00:04:54.283 "iscsi_target_node_set_redirect", 00:04:54.283 "iscsi_target_node_set_auth", 00:04:54.283 "iscsi_target_node_add_lun", 00:04:54.283 "iscsi_get_stats", 00:04:54.283 "iscsi_get_connections", 00:04:54.283 "iscsi_portal_group_set_auth", 00:04:54.284 "iscsi_start_portal_group", 00:04:54.284 "iscsi_delete_portal_group", 00:04:54.284 "iscsi_create_portal_group", 00:04:54.284 "iscsi_get_portal_groups", 00:04:54.284 "iscsi_delete_target_node", 00:04:54.284 "iscsi_target_node_remove_pg_ig_maps", 00:04:54.284 "iscsi_target_node_add_pg_ig_maps", 00:04:54.284 "iscsi_create_target_node", 00:04:54.284 "iscsi_get_target_nodes", 00:04:54.284 "iscsi_delete_initiator_group", 00:04:54.284 "iscsi_initiator_group_remove_initiators", 00:04:54.284 "iscsi_initiator_group_add_initiators", 00:04:54.284 "iscsi_create_initiator_group", 00:04:54.284 "iscsi_get_initiator_groups", 00:04:54.284 "nvmf_set_crdt", 00:04:54.284 "nvmf_set_config", 00:04:54.284 "nvmf_set_max_subsystems", 00:04:54.284 "nvmf_stop_mdns_prr", 00:04:54.284 "nvmf_publish_mdns_prr", 00:04:54.284 "nvmf_subsystem_get_listeners", 00:04:54.284 "nvmf_subsystem_get_qpairs", 00:04:54.284 "nvmf_subsystem_get_controllers", 00:04:54.284 "nvmf_get_stats", 00:04:54.284 "nvmf_get_transports", 00:04:54.284 
"nvmf_create_transport", 00:04:54.284 "nvmf_get_targets", 00:04:54.284 "nvmf_delete_target", 00:04:54.284 "nvmf_create_target", 00:04:54.284 "nvmf_subsystem_allow_any_host", 00:04:54.284 "nvmf_subsystem_remove_host", 00:04:54.284 "nvmf_subsystem_add_host", 00:04:54.284 "nvmf_ns_remove_host", 00:04:54.284 "nvmf_ns_add_host", 00:04:54.284 "nvmf_subsystem_remove_ns", 00:04:54.284 "nvmf_subsystem_add_ns", 00:04:54.284 "nvmf_subsystem_listener_set_ana_state", 00:04:54.284 "nvmf_discovery_get_referrals", 00:04:54.284 "nvmf_discovery_remove_referral", 00:04:54.284 "nvmf_discovery_add_referral", 00:04:54.284 "nvmf_subsystem_remove_listener", 00:04:54.284 "nvmf_subsystem_add_listener", 00:04:54.284 "nvmf_delete_subsystem", 00:04:54.284 "nvmf_create_subsystem", 00:04:54.284 "nvmf_get_subsystems", 00:04:54.284 "env_dpdk_get_mem_stats", 00:04:54.284 "nbd_get_disks", 00:04:54.284 "nbd_stop_disk", 00:04:54.284 "nbd_start_disk", 00:04:54.284 "ublk_recover_disk", 00:04:54.284 "ublk_get_disks", 00:04:54.284 "ublk_stop_disk", 00:04:54.284 "ublk_start_disk", 00:04:54.284 "ublk_destroy_target", 00:04:54.284 "ublk_create_target", 00:04:54.284 "virtio_blk_create_transport", 00:04:54.284 "virtio_blk_get_transports", 00:04:54.284 "vhost_controller_set_coalescing", 00:04:54.284 "vhost_get_controllers", 00:04:54.284 "vhost_delete_controller", 00:04:54.284 "vhost_create_blk_controller", 00:04:54.284 "vhost_scsi_controller_remove_target", 00:04:54.284 "vhost_scsi_controller_add_target", 00:04:54.284 "vhost_start_scsi_controller", 00:04:54.284 "vhost_create_scsi_controller", 00:04:54.284 "thread_set_cpumask", 00:04:54.284 "framework_get_governor", 00:04:54.284 "framework_get_scheduler", 00:04:54.284 "framework_set_scheduler", 00:04:54.284 "framework_get_reactors", 00:04:54.284 "thread_get_io_channels", 00:04:54.284 "thread_get_pollers", 00:04:54.284 "thread_get_stats", 00:04:54.284 "framework_monitor_context_switch", 00:04:54.284 "spdk_kill_instance", 00:04:54.284 "log_enable_timestamps", 00:04:54.284 "log_get_flags", 00:04:54.284 "log_clear_flag", 00:04:54.284 "log_set_flag", 00:04:54.284 "log_get_level", 00:04:54.284 "log_set_level", 00:04:54.284 "log_get_print_level", 00:04:54.284 "log_set_print_level", 00:04:54.284 "framework_enable_cpumask_locks", 00:04:54.284 "framework_disable_cpumask_locks", 00:04:54.284 "framework_wait_init", 00:04:54.284 "framework_start_init", 00:04:54.284 "scsi_get_devices", 00:04:54.284 "bdev_get_histogram", 00:04:54.284 "bdev_enable_histogram", 00:04:54.284 "bdev_set_qos_limit", 00:04:54.284 "bdev_set_qd_sampling_period", 00:04:54.284 "bdev_get_bdevs", 00:04:54.284 "bdev_reset_iostat", 00:04:54.284 "bdev_get_iostat", 00:04:54.284 "bdev_examine", 00:04:54.284 "bdev_wait_for_examine", 00:04:54.284 "bdev_set_options", 00:04:54.284 "notify_get_notifications", 00:04:54.284 "notify_get_types", 00:04:54.284 "accel_get_stats", 00:04:54.284 "accel_set_options", 00:04:54.284 "accel_set_driver", 00:04:54.284 "accel_crypto_key_destroy", 00:04:54.284 "accel_crypto_keys_get", 00:04:54.284 "accel_crypto_key_create", 00:04:54.284 "accel_assign_opc", 00:04:54.284 "accel_get_module_info", 00:04:54.284 "accel_get_opc_assignments", 00:04:54.284 "vmd_rescan", 00:04:54.284 "vmd_remove_device", 00:04:54.284 "vmd_enable", 00:04:54.284 "sock_get_default_impl", 00:04:54.284 "sock_set_default_impl", 00:04:54.284 "sock_impl_set_options", 00:04:54.284 "sock_impl_get_options", 00:04:54.284 "iobuf_get_stats", 00:04:54.284 "iobuf_set_options", 00:04:54.284 "framework_get_pci_devices", 00:04:54.284 
"framework_get_config", 00:04:54.284 "framework_get_subsystems", 00:04:54.284 "trace_get_info", 00:04:54.284 "trace_get_tpoint_group_mask", 00:04:54.284 "trace_disable_tpoint_group", 00:04:54.284 "trace_enable_tpoint_group", 00:04:54.284 "trace_clear_tpoint_mask", 00:04:54.284 "trace_set_tpoint_mask", 00:04:54.284 "keyring_get_keys", 00:04:54.284 "spdk_get_version", 00:04:54.284 "rpc_get_methods" 00:04:54.284 ] 00:04:54.284 07:09:03 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:54.284 07:09:03 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:54.284 07:09:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.284 07:09:03 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:54.284 07:09:03 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59782 00:04:54.284 07:09:03 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 59782 ']' 00:04:54.284 07:09:03 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 59782 00:04:54.284 07:09:03 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:54.284 07:09:03 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:54.284 07:09:03 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59782 00:04:54.284 07:09:03 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:54.284 killing process with pid 59782 00:04:54.284 07:09:03 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:54.284 07:09:03 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59782' 00:04:54.284 07:09:03 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 59782 00:04:54.284 07:09:03 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 59782 00:04:54.542 00:04:54.542 real 0m1.191s 00:04:54.542 user 0m2.104s 00:04:54.542 sys 0m0.373s 00:04:54.542 07:09:03 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.542 07:09:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.542 ************************************ 00:04:54.542 END TEST spdkcli_tcp 00:04:54.542 ************************************ 00:04:54.801 07:09:03 -- common/autotest_common.sh@1142 -- # return 0 00:04:54.801 07:09:03 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:54.801 07:09:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.801 07:09:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.801 07:09:03 -- common/autotest_common.sh@10 -- # set +x 00:04:54.801 ************************************ 00:04:54.801 START TEST dpdk_mem_utility 00:04:54.801 ************************************ 00:04:54.801 07:09:03 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:54.801 * Looking for test storage... 
00:04:54.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:54.801 07:09:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:54.801 07:09:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59865 00:04:54.801 07:09:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59865 00:04:54.801 07:09:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.801 07:09:03 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 59865 ']' 00:04:54.801 07:09:03 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.801 07:09:03 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:54.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.801 07:09:03 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.801 07:09:03 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:54.801 07:09:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:54.801 [2024-07-15 07:09:03.671336] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:04:54.801 [2024-07-15 07:09:03.671435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59865 ] 00:04:55.060 [2024-07-15 07:09:03.810166] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.060 [2024-07-15 07:09:03.881963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.060 [2024-07-15 07:09:03.916649] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:55.322 07:09:04 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:55.322 07:09:04 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:55.322 07:09:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:55.322 07:09:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:55.322 07:09:04 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.322 07:09:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:55.322 { 00:04:55.322 "filename": "/tmp/spdk_mem_dump.txt" 00:04:55.322 } 00:04:55.322 07:09:04 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.322 07:09:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:55.322 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:55.322 1 heaps totaling size 814.000000 MiB 00:04:55.322 size: 814.000000 MiB heap id: 0 00:04:55.322 end heaps---------- 00:04:55.322 8 mempools totaling size 598.116089 MiB 00:04:55.322 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:55.322 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:55.322 size: 84.521057 MiB name: bdev_io_59865 00:04:55.322 size: 51.011292 MiB name: evtpool_59865 00:04:55.322 size: 50.003479 
MiB name: msgpool_59865 00:04:55.322 size: 21.763794 MiB name: PDU_Pool 00:04:55.322 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:55.322 size: 0.026123 MiB name: Session_Pool 00:04:55.322 end mempools------- 00:04:55.322 6 memzones totaling size 4.142822 MiB 00:04:55.322 size: 1.000366 MiB name: RG_ring_0_59865 00:04:55.322 size: 1.000366 MiB name: RG_ring_1_59865 00:04:55.322 size: 1.000366 MiB name: RG_ring_4_59865 00:04:55.322 size: 1.000366 MiB name: RG_ring_5_59865 00:04:55.322 size: 0.125366 MiB name: RG_ring_2_59865 00:04:55.322 size: 0.015991 MiB name: RG_ring_3_59865 00:04:55.322 end memzones------- 00:04:55.322 07:09:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:55.322 heap id: 0 total size: 814.000000 MiB number of busy elements: 295 number of free elements: 15 00:04:55.322 list of free elements. size: 12.472839 MiB 00:04:55.322 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:55.322 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:55.322 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:55.322 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:55.322 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:55.322 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:55.322 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:55.322 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:55.322 element at address: 0x200000200000 with size: 0.833191 MiB 00:04:55.322 element at address: 0x20001aa00000 with size: 0.570068 MiB 00:04:55.322 element at address: 0x20000b200000 with size: 0.488892 MiB 00:04:55.322 element at address: 0x200000800000 with size: 0.486328 MiB 00:04:55.322 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:55.322 element at address: 0x200027e00000 with size: 0.395752 MiB 00:04:55.322 element at address: 0x200003a00000 with size: 0.347839 MiB 00:04:55.322 list of standard malloc elements. 
size: 199.264587 MiB 00:04:55.322 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:55.322 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:55.322 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:55.322 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:55.322 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:55.322 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:55.322 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:55.322 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:55.322 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:55.322 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:04:55.322 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:55.322 element at address: 0x20000087c800 with size: 0.000183 MiB 00:04:55.322 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:04:55.322 element at address: 0x20000087c980 with size: 0.000183 MiB 00:04:55.322 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:55.322 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:55.322 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:55.322 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:55.322 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:55.322 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:55.322 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:55.322 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a59180 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a59240 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a59300 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a59480 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a59540 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a59600 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a59780 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a59840 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a59900 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:55.323 element at 
address: 0x200003a5a380 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:55.323 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:55.323 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:55.323 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa92440 
with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa94900 with size: 0.000183 MiB 
00:04:55.323 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:55.323 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200027e65500 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:55.323 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:55.324 element at 
address: 0x200027e6db00 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:55.324 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:55.324 list of memzone associated elements. 
size: 602.262573 MiB 00:04:55.324 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:55.324 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:55.324 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:55.324 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:55.324 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:55.324 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59865_0 00:04:55.324 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:55.324 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59865_0 00:04:55.324 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:55.324 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59865_0 00:04:55.324 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:55.324 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:55.324 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:55.324 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:55.324 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:55.324 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59865 00:04:55.324 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:55.324 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59865 00:04:55.324 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:55.324 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59865 00:04:55.324 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:55.324 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:55.324 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:55.324 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:55.324 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:55.324 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:55.324 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:55.324 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:55.324 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:55.324 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59865 00:04:55.324 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:55.324 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59865 00:04:55.324 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:55.324 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59865 00:04:55.324 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:55.324 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59865 00:04:55.324 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:55.324 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59865 00:04:55.324 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:55.324 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:55.324 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:55.324 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:55.324 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:55.324 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:55.324 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:55.324 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_59865 00:04:55.324 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:55.324 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:55.324 element at address: 0x200027e65680 with size: 0.023743 MiB 00:04:55.324 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:55.324 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:55.324 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59865 00:04:55.324 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:04:55.324 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:55.324 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:55.324 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59865 00:04:55.324 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:55.324 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59865 00:04:55.324 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:04:55.324 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:55.324 07:09:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:55.324 07:09:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59865 00:04:55.324 07:09:04 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 59865 ']' 00:04:55.324 07:09:04 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 59865 00:04:55.324 07:09:04 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:55.324 07:09:04 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:55.324 07:09:04 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59865 00:04:55.324 07:09:04 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:55.324 killing process with pid 59865 00:04:55.324 07:09:04 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:55.324 07:09:04 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59865' 00:04:55.324 07:09:04 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 59865 00:04:55.324 07:09:04 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 59865 00:04:55.584 00:04:55.584 real 0m0.995s 00:04:55.584 user 0m1.063s 00:04:55.584 sys 0m0.291s 00:04:55.584 07:09:04 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.584 07:09:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:55.584 ************************************ 00:04:55.584 END TEST dpdk_mem_utility 00:04:55.584 ************************************ 00:04:55.843 07:09:04 -- common/autotest_common.sh@1142 -- # return 0 00:04:55.843 07:09:04 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:55.843 07:09:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.843 07:09:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.843 07:09:04 -- common/autotest_common.sh@10 -- # set +x 00:04:55.843 ************************************ 00:04:55.843 START TEST event 00:04:55.843 ************************************ 00:04:55.843 07:09:04 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:55.843 * Looking for test storage... 
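Editor's note: dpdk_mem_utility, which ends above, is a two-step check: the env_dpdk_get_mem_stats RPC makes the running target write /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then digests that dump into the heap, mempool and memzone summaries printed above (the -m 0 pass adds the per-element listing for heap id 0). A sketch of the same two calls against a running target:

    spdk=/home/vagrant/spdk_repo/spdk

    # Ask the target to dump its DPDK memory state; the reply names the dump file
    "$spdk/scripts/rpc.py" env_dpdk_get_mem_stats     # -> {"filename": "/tmp/spdk_mem_dump.txt"}

    # Summarize heaps, mempools and memzones from the dump
    "$spdk/scripts/dpdk_mem_info.py"

    # Detailed free/busy element listing for heap id 0, as shown above
    "$spdk/scripts/dpdk_mem_info.py" -m 0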
00:04:55.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:55.843 07:09:04 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:55.843 07:09:04 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:55.843 07:09:04 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:55.843 07:09:04 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:55.843 07:09:04 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.843 07:09:04 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.843 ************************************ 00:04:55.843 START TEST event_perf 00:04:55.843 ************************************ 00:04:55.843 07:09:04 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:55.843 Running I/O for 1 seconds...[2024-07-15 07:09:04.674271] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:04:55.843 [2024-07-15 07:09:04.674367] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59929 ] 00:04:56.102 [2024-07-15 07:09:04.813806] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:56.102 [2024-07-15 07:09:04.888543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.102 [2024-07-15 07:09:04.888691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.102 Running I/O for 1 seconds...[2024-07-15 07:09:04.889598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:56.102 [2024-07-15 07:09:04.889607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.038 00:04:57.038 lcore 0: 192775 00:04:57.038 lcore 1: 192775 00:04:57.038 lcore 2: 192775 00:04:57.038 lcore 3: 192776 00:04:57.038 done. 00:04:57.038 00:04:57.038 real 0m1.306s 00:04:57.038 user 0m4.131s 00:04:57.038 sys 0m0.053s 00:04:57.038 07:09:05 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.038 ************************************ 00:04:57.038 END TEST event_perf 00:04:57.038 ************************************ 00:04:57.038 07:09:05 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:57.297 07:09:06 event -- common/autotest_common.sh@1142 -- # return 0 00:04:57.297 07:09:06 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:57.297 07:09:06 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:57.297 07:09:06 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.297 07:09:06 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.297 ************************************ 00:04:57.297 START TEST event_reactor 00:04:57.297 ************************************ 00:04:57.297 07:09:06 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:57.297 [2024-07-15 07:09:06.036548] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:04:57.297 [2024-07-15 07:09:06.036688] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59962 ] 00:04:57.297 [2024-07-15 07:09:06.177963] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.297 [2024-07-15 07:09:06.242296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.675 test_start 00:04:58.675 oneshot 00:04:58.675 tick 100 00:04:58.675 tick 100 00:04:58.675 tick 250 00:04:58.675 tick 100 00:04:58.675 tick 100 00:04:58.675 tick 100 00:04:58.675 tick 250 00:04:58.675 tick 500 00:04:58.675 tick 100 00:04:58.675 tick 100 00:04:58.675 tick 250 00:04:58.675 tick 100 00:04:58.675 tick 100 00:04:58.675 test_end 00:04:58.675 00:04:58.675 real 0m1.294s 00:04:58.675 user 0m1.144s 00:04:58.675 sys 0m0.043s 00:04:58.675 07:09:07 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.675 ************************************ 00:04:58.675 END TEST event_reactor 00:04:58.675 07:09:07 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:58.675 ************************************ 00:04:58.675 07:09:07 event -- common/autotest_common.sh@1142 -- # return 0 00:04:58.675 07:09:07 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:58.675 07:09:07 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:58.675 07:09:07 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.675 07:09:07 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.675 ************************************ 00:04:58.675 START TEST event_reactor_perf 00:04:58.675 ************************************ 00:04:58.675 07:09:07 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:58.675 [2024-07-15 07:09:07.385889] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:04:58.675 [2024-07-15 07:09:07.385997] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59998 ] 00:04:58.675 [2024-07-15 07:09:07.524416] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.675 [2024-07-15 07:09:07.583301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.076 test_start 00:05:00.076 test_end 00:05:00.076 Performance: 379291 events per second 00:05:00.076 00:05:00.076 real 0m1.291s 00:05:00.076 user 0m1.150s 00:05:00.076 sys 0m0.035s 00:05:00.076 07:09:08 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.076 07:09:08 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:00.076 ************************************ 00:05:00.076 END TEST event_reactor_perf 00:05:00.076 ************************************ 00:05:00.076 07:09:08 event -- common/autotest_common.sh@1142 -- # return 0 00:05:00.076 07:09:08 event -- event/event.sh@49 -- # uname -s 00:05:00.076 07:09:08 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:00.076 07:09:08 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:00.076 07:09:08 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.076 07:09:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.076 07:09:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.076 ************************************ 00:05:00.076 START TEST event_scheduler 00:05:00.076 ************************************ 00:05:00.076 07:09:08 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:00.076 * Looking for test storage... 00:05:00.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:00.076 07:09:08 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:00.076 07:09:08 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60059 00:05:00.076 07:09:08 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.076 07:09:08 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:00.076 07:09:08 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60059 00:05:00.076 07:09:08 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 60059 ']' 00:05:00.076 07:09:08 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.076 07:09:08 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.076 07:09:08 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.076 07:09:08 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.076 07:09:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.076 [2024-07-15 07:09:08.879752] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:05:00.076 [2024-07-15 07:09:08.879888] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60059 ] 00:05:00.335 [2024-07-15 07:09:09.036536] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:00.335 [2024-07-15 07:09:09.102261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.335 [2024-07-15 07:09:09.102442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.335 [2024-07-15 07:09:09.102550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:00.335 [2024-07-15 07:09:09.102846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.902 07:09:09 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.902 07:09:09 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:00.902 07:09:09 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:00.902 07:09:09 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.902 07:09:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.902 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:00.902 POWER: Cannot set governor of lcore 0 to userspace 00:05:00.902 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:00.902 POWER: Cannot set governor of lcore 0 to performance 00:05:00.902 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:00.902 POWER: Cannot set governor of lcore 0 to userspace 00:05:00.902 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:00.902 POWER: Cannot set governor of lcore 0 to userspace 00:05:00.902 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:00.902 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:00.902 POWER: Unable to set Power Management Environment for lcore 0 00:05:00.902 [2024-07-15 07:09:09.847978] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:00.903 [2024-07-15 07:09:09.847992] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:00.903 [2024-07-15 07:09:09.848000] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:00.903 [2024-07-15 07:09:09.848013] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:00.903 [2024-07-15 07:09:09.848020] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:00.903 [2024-07-15 07:09:09.848028] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:00.903 07:09:09 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.903 07:09:09 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:00.903 07:09:09 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.903 07:09:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:01.162 [2024-07-15 07:09:09.884666] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:01.162 [2024-07-15 07:09:09.903188] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:01.162 07:09:09 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.162 07:09:09 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:01.162 07:09:09 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.162 07:09:09 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.162 07:09:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:01.162 ************************************ 00:05:01.162 START TEST scheduler_create_thread 00:05:01.162 ************************************ 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.162 2 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.162 3 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.162 4 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.162 5 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.162 6 00:05:01.162 
07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.162 7 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.162 8 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.162 9 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.162 10 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.162 07:09:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.162 07:09:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.162 07:09:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:01.162 07:09:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:01.162 07:09:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.162 07:09:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.162 07:09:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.162 07:09:10 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:01.162 07:09:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.162 07:09:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.538 07:09:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.538 07:09:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:02.538 07:09:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:02.538 07:09:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.538 07:09:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.911 07:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.911 00:05:03.911 real 0m2.616s 00:05:03.911 user 0m0.017s 00:05:03.911 sys 0m0.007s 00:05:03.911 07:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.911 07:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.911 ************************************ 00:05:03.911 END TEST scheduler_create_thread 00:05:03.911 ************************************ 00:05:03.911 07:09:12 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:03.911 07:09:12 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:03.911 07:09:12 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60059 00:05:03.911 07:09:12 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 60059 ']' 00:05:03.911 07:09:12 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 60059 00:05:03.911 07:09:12 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:03.911 07:09:12 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:03.911 07:09:12 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60059 00:05:03.911 07:09:12 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:03.911 07:09:12 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:03.911 killing process with pid 60059 00:05:03.911 07:09:12 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60059' 00:05:03.911 07:09:12 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 60059 00:05:03.911 07:09:12 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 60059 00:05:04.171 [2024-07-15 07:09:13.011013] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
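For readers following the trace, the scheduler subtest above reduces to the RPC sequence below. This is a condensed sketch reconstructed from the xtrace output only: rpc_cmd is the autotest helper wrapping scripts/rpc.py, and the backgrounding with & plus the pid capture are assumptions, since the trace records only the resulting scheduler_pid.

    # condensed from the xtrace above: launch the scheduler test app, switch to the
    # dynamic scheduler (it falls back when the DPDK governor cannot be initialized),
    # then exercise the scheduler RPC plugin
    /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!                                   # 60059 in this run
    rpc_cmd framework_set_scheduler dynamic
    rpc_cmd framework_start_init
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100   # thread_id=12 in this run
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
    killprocess $scheduler_pid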
00:05:04.430 00:05:04.430 real 0m4.464s 00:05:04.430 user 0m8.544s 00:05:04.430 sys 0m0.318s 00:05:04.430 07:09:13 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.430 07:09:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:04.430 ************************************ 00:05:04.430 END TEST event_scheduler 00:05:04.430 ************************************ 00:05:04.430 07:09:13 event -- common/autotest_common.sh@1142 -- # return 0 00:05:04.430 07:09:13 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:04.430 07:09:13 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:04.430 07:09:13 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.430 07:09:13 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.430 07:09:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.430 ************************************ 00:05:04.430 START TEST app_repeat 00:05:04.430 ************************************ 00:05:04.430 07:09:13 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:04.430 07:09:13 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.430 07:09:13 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.430 07:09:13 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:04.430 07:09:13 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.430 07:09:13 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:04.430 07:09:13 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:04.430 07:09:13 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:04.430 07:09:13 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60153 00:05:04.430 07:09:13 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:04.430 07:09:13 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.430 Process app_repeat pid: 60153 00:05:04.430 07:09:13 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60153' 00:05:04.430 07:09:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:04.430 spdk_app_start Round 0 00:05:04.431 07:09:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:04.431 07:09:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60153 /var/tmp/spdk-nbd.sock 00:05:04.431 07:09:13 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60153 ']' 00:05:04.431 07:09:13 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:04.431 07:09:13 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.431 07:09:13 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:04.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:04.431 07:09:13 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.431 07:09:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:04.431 [2024-07-15 07:09:13.277067] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:05:04.431 [2024-07-15 07:09:13.277186] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60153 ] 00:05:04.689 [2024-07-15 07:09:13.411866] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.689 [2024-07-15 07:09:13.475967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.689 [2024-07-15 07:09:13.475979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.689 [2024-07-15 07:09:13.505888] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:04.689 07:09:13 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.689 07:09:13 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:04.689 07:09:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.946 Malloc0 00:05:04.946 07:09:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.204 Malloc1 00:05:05.462 07:09:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.462 07:09:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.462 07:09:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.462 07:09:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:05.462 07:09:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.462 07:09:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:05.462 07:09:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.462 07:09:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.462 07:09:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.462 07:09:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:05.462 07:09:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.462 07:09:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:05.462 07:09:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:05.462 07:09:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:05.462 07:09:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.462 07:09:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:05.721 /dev/nbd0 00:05:05.721 07:09:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:05.721 07:09:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:05.721 07:09:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:05.721 07:09:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:05.721 07:09:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:05.721 07:09:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:05.721 07:09:14 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:05.721 07:09:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:05.721 07:09:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:05.721 07:09:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:05.721 07:09:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.721 1+0 records in 00:05:05.721 1+0 records out 00:05:05.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279344 s, 14.7 MB/s 00:05:05.721 07:09:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.721 07:09:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:05.721 07:09:14 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.721 07:09:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:05.721 07:09:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:05.721 07:09:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.721 07:09:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.721 07:09:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:05.980 /dev/nbd1 00:05:05.980 07:09:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:05.980 07:09:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:05.980 07:09:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:05.980 07:09:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:05.980 07:09:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:05.980 07:09:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:05.980 07:09:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:05.980 07:09:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:05.980 07:09:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:05.980 07:09:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:05.980 07:09:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.980 1+0 records in 00:05:05.980 1+0 records out 00:05:05.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324484 s, 12.6 MB/s 00:05:05.980 07:09:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.980 07:09:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:05.980 07:09:14 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.980 07:09:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:05.980 07:09:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:05.980 07:09:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.980 07:09:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.980 07:09:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:05:05.980 07:09:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.980 07:09:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.239 07:09:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:06.239 { 00:05:06.239 "nbd_device": "/dev/nbd0", 00:05:06.239 "bdev_name": "Malloc0" 00:05:06.239 }, 00:05:06.239 { 00:05:06.239 "nbd_device": "/dev/nbd1", 00:05:06.239 "bdev_name": "Malloc1" 00:05:06.239 } 00:05:06.239 ]' 00:05:06.239 07:09:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.239 07:09:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.239 { 00:05:06.239 "nbd_device": "/dev/nbd0", 00:05:06.239 "bdev_name": "Malloc0" 00:05:06.239 }, 00:05:06.239 { 00:05:06.239 "nbd_device": "/dev/nbd1", 00:05:06.239 "bdev_name": "Malloc1" 00:05:06.239 } 00:05:06.239 ]' 00:05:06.239 07:09:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.239 /dev/nbd1' 00:05:06.239 07:09:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.239 07:09:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.239 /dev/nbd1' 00:05:06.239 07:09:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.239 07:09:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.239 07:09:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.239 07:09:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.239 07:09:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.239 07:09:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.239 07:09:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.239 07:09:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.239 07:09:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.239 07:09:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.239 07:09:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.239 256+0 records in 00:05:06.239 256+0 records out 00:05:06.239 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00720871 s, 145 MB/s 00:05:06.239 07:09:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.239 07:09:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.239 256+0 records in 00:05:06.239 256+0 records out 00:05:06.239 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265998 s, 39.4 MB/s 00:05:06.240 07:09:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.240 07:09:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.240 256+0 records in 00:05:06.240 256+0 records out 00:05:06.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268876 s, 39.0 MB/s 00:05:06.240 07:09:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:06.240 07:09:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.240 07:09:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.240 07:09:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.240 07:09:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.240 07:09:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.240 07:09:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.240 07:09:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.240 07:09:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.240 07:09:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.240 07:09:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.500 07:09:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.500 07:09:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.500 07:09:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.500 07:09:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.500 07:09:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.500 07:09:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:06.500 07:09:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.500 07:09:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:06.769 07:09:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:06.769 07:09:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:06.769 07:09:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:06.769 07:09:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.769 07:09:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.769 07:09:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:06.769 07:09:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.769 07:09:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.769 07:09:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.769 07:09:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:07.028 07:09:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:07.028 07:09:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:07.028 07:09:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:07.028 07:09:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.028 07:09:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.028 07:09:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:07.028 07:09:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.028 07:09:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.028 07:09:15 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.028 07:09:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.028 07:09:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.028 07:09:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:07.028 07:09:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:07.028 07:09:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.287 07:09:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:07.287 07:09:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:07.287 07:09:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.287 07:09:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:07.287 07:09:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:07.287 07:09:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:07.287 07:09:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:07.287 07:09:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:07.287 07:09:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:07.287 07:09:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.546 07:09:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:07.546 [2024-07-15 07:09:16.449364] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.805 [2024-07-15 07:09:16.506597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.805 [2024-07-15 07:09:16.506608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.805 [2024-07-15 07:09:16.537688] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:07.805 [2024-07-15 07:09:16.537789] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:07.805 [2024-07-15 07:09:16.537801] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:11.087 07:09:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:11.087 spdk_app_start Round 1 00:05:11.087 07:09:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:11.087 07:09:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60153 /var/tmp/spdk-nbd.sock 00:05:11.087 07:09:19 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60153 ']' 00:05:11.087 07:09:19 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.087 07:09:19 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:11.087 07:09:19 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
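Each app_repeat round above performs the same nbd round-trip check. Condensed from the xtrace, it looks roughly like the sketch below; RPC is only shorthand for the rpc.py invocation and socket path seen in the trace, and the intermediate waitfornbd / nbd_get_disks bookkeeping is omitted.

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    $RPC bdev_malloc_create 64 4096                       # -> Malloc0
    $RPC bdev_malloc_create 64 4096                       # -> Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256   # reference pattern
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    dd if=nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0                    # verify the data read back
    cmp -b -n 1M nbdrandtest /dev/nbd1
    rm nbdrandtest
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1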
00:05:11.087 07:09:19 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.087 07:09:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.087 07:09:19 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.087 07:09:19 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:11.087 07:09:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.087 Malloc0 00:05:11.087 07:09:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.345 Malloc1 00:05:11.345 07:09:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.345 07:09:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.345 07:09:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.345 07:09:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:11.345 07:09:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.345 07:09:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:11.345 07:09:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.345 07:09:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.345 07:09:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.345 07:09:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:11.345 07:09:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.345 07:09:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:11.345 07:09:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:11.345 07:09:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:11.345 07:09:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.345 07:09:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:11.603 /dev/nbd0 00:05:11.603 07:09:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:11.603 07:09:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:11.603 07:09:20 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:11.603 07:09:20 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:11.603 07:09:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:11.603 07:09:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:11.603 07:09:20 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:11.603 07:09:20 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:11.604 07:09:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:11.604 07:09:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:11.604 07:09:20 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.604 1+0 records in 00:05:11.604 1+0 records out 
00:05:11.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386108 s, 10.6 MB/s 00:05:11.604 07:09:20 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.604 07:09:20 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:11.604 07:09:20 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.604 07:09:20 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:11.604 07:09:20 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:11.604 07:09:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.604 07:09:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.604 07:09:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:11.861 /dev/nbd1 00:05:11.861 07:09:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:11.861 07:09:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:11.861 07:09:20 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:11.861 07:09:20 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:11.861 07:09:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:11.861 07:09:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:11.861 07:09:20 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:11.861 07:09:20 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:11.861 07:09:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:11.861 07:09:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:11.861 07:09:20 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.861 1+0 records in 00:05:11.861 1+0 records out 00:05:11.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000657188 s, 6.2 MB/s 00:05:11.861 07:09:20 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.861 07:09:20 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:11.861 07:09:20 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.861 07:09:20 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:11.861 07:09:20 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:11.861 07:09:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.861 07:09:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.861 07:09:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.861 07:09:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.861 07:09:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.119 07:09:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:12.119 { 00:05:12.119 "nbd_device": "/dev/nbd0", 00:05:12.119 "bdev_name": "Malloc0" 00:05:12.119 }, 00:05:12.119 { 00:05:12.119 "nbd_device": "/dev/nbd1", 00:05:12.119 "bdev_name": "Malloc1" 00:05:12.119 } 
00:05:12.119 ]' 00:05:12.119 07:09:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:12.119 { 00:05:12.119 "nbd_device": "/dev/nbd0", 00:05:12.119 "bdev_name": "Malloc0" 00:05:12.119 }, 00:05:12.119 { 00:05:12.119 "nbd_device": "/dev/nbd1", 00:05:12.119 "bdev_name": "Malloc1" 00:05:12.119 } 00:05:12.119 ]' 00:05:12.119 07:09:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.119 07:09:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:12.119 /dev/nbd1' 00:05:12.119 07:09:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:12.119 /dev/nbd1' 00:05:12.119 07:09:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.119 07:09:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:12.119 07:09:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:12.119 07:09:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:12.119 07:09:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:12.119 07:09:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:12.119 07:09:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.119 07:09:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.119 07:09:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:12.119 07:09:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.119 07:09:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:12.119 07:09:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:12.119 256+0 records in 00:05:12.119 256+0 records out 00:05:12.119 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107068 s, 97.9 MB/s 00:05:12.119 07:09:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.119 07:09:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:12.386 256+0 records in 00:05:12.386 256+0 records out 00:05:12.386 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027987 s, 37.5 MB/s 00:05:12.386 07:09:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.386 07:09:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:12.386 256+0 records in 00:05:12.386 256+0 records out 00:05:12.386 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297887 s, 35.2 MB/s 00:05:12.386 07:09:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:12.386 07:09:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.386 07:09:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.386 07:09:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:12.386 07:09:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.386 07:09:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:12.386 07:09:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:12.386 07:09:21 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:12.386 07:09:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:12.386 07:09:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.386 07:09:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:12.386 07:09:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.386 07:09:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:12.386 07:09:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.386 07:09:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.386 07:09:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:12.386 07:09:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:12.386 07:09:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.386 07:09:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:12.645 07:09:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:12.645 07:09:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:12.645 07:09:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:12.645 07:09:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.645 07:09:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.645 07:09:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:12.645 07:09:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.645 07:09:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.645 07:09:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.645 07:09:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:12.903 07:09:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:12.903 07:09:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:12.903 07:09:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:12.903 07:09:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.903 07:09:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.903 07:09:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:12.903 07:09:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.903 07:09:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.903 07:09:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.903 07:09:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.903 07:09:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.162 07:09:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.162 07:09:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.162 07:09:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:13.162 07:09:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.162 07:09:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.162 07:09:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.162 07:09:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:13.162 07:09:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.162 07:09:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.162 07:09:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.162 07:09:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.162 07:09:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.162 07:09:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:13.421 07:09:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:13.679 [2024-07-15 07:09:22.387726] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.679 [2024-07-15 07:09:22.447939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.679 [2024-07-15 07:09:22.447948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.679 [2024-07-15 07:09:22.479713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:13.679 [2024-07-15 07:09:22.479833] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:13.679 [2024-07-15 07:09:22.479846] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:16.964 spdk_app_start Round 2 00:05:16.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:16.964 07:09:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:16.964 07:09:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:16.964 07:09:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60153 /var/tmp/spdk-nbd.sock 00:05:16.964 07:09:25 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60153 ']' 00:05:16.964 07:09:25 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.964 07:09:25 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.964 07:09:25 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
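The round that just completed above exercises nbd_dd_data_verify in both directions: 1 MiB of random data is written to a scratch file, pushed onto each exported /dev/nbdX with O_DIRECT, compared back with cmp, and the devices are then detached until nbd_get_disks reports an empty list before the app is killed for the next round. A minimal standalone sketch of that write-and-verify flow, reusing the RPC socket and device names from the trace (the scratch-file path here is only a stand-in):

    #!/usr/bin/env bash
    # Sketch of the dd/cmp data-verify loop from bdev/nbd_common.sh above.
    set -euo pipefail

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    tmp_file=$(mktemp)                 # stand-in for .../test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write phase: 1 MiB of random data, copied to every NBD device with O_DIRECT
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify phase: byte-compare the first 1 MiB of each device against the file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm -f "$tmp_file"

    # tear-down: detach both devices over the NBD RPC socket
    for dev in "${nbd_list[@]}"; do
        "$rpc" -s "$sock" nbd_stop_disk "$dev"
    done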
00:05:16.964 07:09:25 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.964 07:09:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.964 07:09:25 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.964 07:09:25 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:16.964 07:09:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.964 Malloc0 00:05:16.964 07:09:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.223 Malloc1 00:05:17.223 07:09:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.223 07:09:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.223 07:09:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.223 07:09:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.223 07:09:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.223 07:09:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.223 07:09:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.223 07:09:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.223 07:09:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.223 07:09:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.223 07:09:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.223 07:09:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.223 07:09:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:17.223 07:09:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.223 07:09:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.223 07:09:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.482 /dev/nbd0 00:05:17.482 07:09:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.482 07:09:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.482 07:09:26 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:17.482 07:09:26 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:17.482 07:09:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:17.482 07:09:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:17.482 07:09:26 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:17.482 07:09:26 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:17.482 07:09:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:17.482 07:09:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:17.482 07:09:26 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.482 1+0 records in 00:05:17.482 1+0 records out 
00:05:17.482 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372481 s, 11.0 MB/s 00:05:17.482 07:09:26 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.482 07:09:26 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:17.482 07:09:26 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.482 07:09:26 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:17.482 07:09:26 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:17.482 07:09:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.482 07:09:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.482 07:09:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.740 /dev/nbd1 00:05:17.740 07:09:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.740 07:09:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.740 07:09:26 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:17.740 07:09:26 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:17.740 07:09:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:17.740 07:09:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:17.740 07:09:26 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:17.740 07:09:26 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:17.740 07:09:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:17.740 07:09:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:17.740 07:09:26 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.740 1+0 records in 00:05:17.740 1+0 records out 00:05:17.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215855 s, 19.0 MB/s 00:05:17.740 07:09:26 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.740 07:09:26 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:17.740 07:09:26 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.740 07:09:26 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:17.740 07:09:26 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:17.740 07:09:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.740 07:09:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.740 07:09:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.741 07:09:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.741 07:09:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.999 07:09:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:17.999 { 00:05:17.999 "nbd_device": "/dev/nbd0", 00:05:17.999 "bdev_name": "Malloc0" 00:05:17.999 }, 00:05:17.999 { 00:05:17.999 "nbd_device": "/dev/nbd1", 00:05:17.999 "bdev_name": "Malloc1" 00:05:17.999 } 
00:05:17.999 ]' 00:05:17.999 07:09:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:17.999 { 00:05:17.999 "nbd_device": "/dev/nbd0", 00:05:17.999 "bdev_name": "Malloc0" 00:05:17.999 }, 00:05:17.999 { 00:05:17.999 "nbd_device": "/dev/nbd1", 00:05:17.999 "bdev_name": "Malloc1" 00:05:17.999 } 00:05:17.999 ]' 00:05:17.999 07:09:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.999 07:09:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:17.999 /dev/nbd1' 00:05:17.999 07:09:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:17.999 /dev/nbd1' 00:05:18.000 07:09:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.000 07:09:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.000 07:09:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.000 07:09:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.000 07:09:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.000 07:09:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.000 07:09:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.000 07:09:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.000 07:09:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.000 07:09:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.000 07:09:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.000 07:09:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.258 256+0 records in 00:05:18.258 256+0 records out 00:05:18.258 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105405 s, 99.5 MB/s 00:05:18.258 07:09:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.258 07:09:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.258 256+0 records in 00:05:18.258 256+0 records out 00:05:18.258 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234828 s, 44.7 MB/s 00:05:18.258 07:09:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.258 07:09:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.258 256+0 records in 00:05:18.258 256+0 records out 00:05:18.258 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250524 s, 41.9 MB/s 00:05:18.258 07:09:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.258 07:09:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.258 07:09:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.258 07:09:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.259 07:09:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.259 07:09:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.259 07:09:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.259 07:09:27 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.259 07:09:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.259 07:09:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.259 07:09:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.259 07:09:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.259 07:09:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.259 07:09:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.259 07:09:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.259 07:09:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.259 07:09:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.259 07:09:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.259 07:09:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.520 07:09:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.520 07:09:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.520 07:09:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.520 07:09:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.520 07:09:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.520 07:09:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.520 07:09:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.520 07:09:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.520 07:09:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.520 07:09:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.779 07:09:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.779 07:09:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.779 07:09:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.779 07:09:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.779 07:09:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.779 07:09:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.779 07:09:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.779 07:09:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.779 07:09:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.779 07:09:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.779 07:09:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.038 07:09:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:19.038 07:09:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:19.038 07:09:27 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:19.038 07:09:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:19.038 07:09:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:19.038 07:09:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.038 07:09:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:19.038 07:09:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:19.038 07:09:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:19.038 07:09:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:19.038 07:09:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:19.038 07:09:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:19.038 07:09:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.344 07:09:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:19.344 [2024-07-15 07:09:28.289145] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.603 [2024-07-15 07:09:28.346955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.603 [2024-07-15 07:09:28.346967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.603 [2024-07-15 07:09:28.377741] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:19.603 [2024-07-15 07:09:28.377842] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.603 [2024-07-15 07:09:28.377855] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:22.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.889 07:09:31 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60153 /var/tmp/spdk-nbd.sock 00:05:22.889 07:09:31 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60153 ']' 00:05:22.889 07:09:31 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.889 07:09:31 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.889 07:09:31 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
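Each round above also gates on waitfornbd right after nbd_start_disk: the helper polls /proc/partitions until the kernel exposes the new device, then performs one 4 KiB O_DIRECT read to prove it is actually usable. A hedged sketch of that readiness check follows; the retry bound matches the trace, but the sleep interval and the discard of the read data are assumptions (the real helper writes to a temporary file and checks its size):

    # Poll for the device node, then do a single direct-I/O read.
    waitfornbd_sketch() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                   # assumed back-off
        done
        # one 4 KiB block with O_DIRECT; a failure here means the export is not ready
        dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
    }

    waitfornbd_sketch nbd0 && waitfornbd_sketch nbd1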
00:05:22.889 07:09:31 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.889 07:09:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.889 07:09:31 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.889 07:09:31 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:22.889 07:09:31 event.app_repeat -- event/event.sh@39 -- # killprocess 60153 00:05:22.889 07:09:31 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 60153 ']' 00:05:22.889 07:09:31 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 60153 00:05:22.889 07:09:31 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:22.889 07:09:31 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:22.889 07:09:31 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60153 00:05:22.889 killing process with pid 60153 00:05:22.889 07:09:31 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:22.889 07:09:31 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:22.889 07:09:31 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60153' 00:05:22.889 07:09:31 event.app_repeat -- common/autotest_common.sh@967 -- # kill 60153 00:05:22.889 07:09:31 event.app_repeat -- common/autotest_common.sh@972 -- # wait 60153 00:05:22.889 spdk_app_start is called in Round 0. 00:05:22.889 Shutdown signal received, stop current app iteration 00:05:22.889 Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 reinitialization... 00:05:22.889 spdk_app_start is called in Round 1. 00:05:22.889 Shutdown signal received, stop current app iteration 00:05:22.889 Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 reinitialization... 00:05:22.889 spdk_app_start is called in Round 2. 00:05:22.889 Shutdown signal received, stop current app iteration 00:05:22.889 Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 reinitialization... 00:05:22.889 spdk_app_start is called in Round 3. 
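The shutdown sequence above relies on the killprocess helper: confirm the PID is still alive with kill -0, read its command name (reactor_0 for an SPDK app), then signal and reap it. A minimal sketch under those assumptions; the real helper's handling of sudo-wrapped processes differs and is simplified here:

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" || return 1                   # is the process alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_0 for spdk_tgt
        [ "$name" = sudo ] && return 1               # simplified: the real helper signals the child instead
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true              # reap it if it is our child
    }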
00:05:22.889 Shutdown signal received, stop current app iteration 00:05:22.889 07:09:31 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:22.889 07:09:31 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:22.889 00:05:22.889 real 0m18.389s 00:05:22.889 user 0m41.714s 00:05:22.889 sys 0m2.687s 00:05:22.889 07:09:31 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.889 07:09:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.889 ************************************ 00:05:22.889 END TEST app_repeat 00:05:22.889 ************************************ 00:05:22.889 07:09:31 event -- common/autotest_common.sh@1142 -- # return 0 00:05:22.889 07:09:31 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:22.889 07:09:31 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:22.889 07:09:31 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.889 07:09:31 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.889 07:09:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.889 ************************************ 00:05:22.889 START TEST cpu_locks 00:05:22.889 ************************************ 00:05:22.889 07:09:31 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:22.889 * Looking for test storage... 00:05:22.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:22.889 07:09:31 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:22.889 07:09:31 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:22.889 07:09:31 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:22.889 07:09:31 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:22.889 07:09:31 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.889 07:09:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.889 07:09:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.889 ************************************ 00:05:22.889 START TEST default_locks 00:05:22.889 ************************************ 00:05:22.889 07:09:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:22.889 07:09:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60584 00:05:22.889 07:09:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60584 00:05:22.889 07:09:31 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60584 ']' 00:05:22.889 07:09:31 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.889 07:09:31 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.889 07:09:31 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
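The cpu_locks tests that start here keep reusing one small check, visible as locks_exist in the trace below: list the file locks held by the target's PID with lslocks and look for entries whose name contains spdk_cpu_lock. A sketch of that check; the on-disk lock-file naming is not shown in the trace and is deliberately not assumed:

    # Return success if the given PID holds at least one SPDK core lock.
    locks_exist_sketch() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    # e.g. against the single-core target started below:
    # locks_exist_sketch 60584 && echo "core 0 lock held"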
00:05:22.889 07:09:31 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.889 07:09:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.889 07:09:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.148 [2024-07-15 07:09:31.843949] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:05:23.148 [2024-07-15 07:09:31.844067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60584 ] 00:05:23.148 [2024-07-15 07:09:31.985387] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.148 [2024-07-15 07:09:32.040356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.148 [2024-07-15 07:09:32.069179] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:24.083 07:09:32 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.083 07:09:32 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:24.083 07:09:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60584 00:05:24.083 07:09:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60584 00:05:24.083 07:09:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.342 07:09:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60584 00:05:24.342 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 60584 ']' 00:05:24.342 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 60584 00:05:24.342 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:24.342 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.342 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60584 00:05:24.342 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.342 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.342 killing process with pid 60584 00:05:24.342 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60584' 00:05:24.342 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 60584 00:05:24.342 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 60584 00:05:24.600 07:09:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60584 00:05:24.600 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:24.600 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60584 00:05:24.600 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:24.600 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.600 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:24.600 07:09:33 
event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.600 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 60584 00:05:24.600 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60584 ']' 00:05:24.600 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.600 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.600 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.600 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.600 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.600 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60584) - No such process 00:05:24.600 ERROR: process (pid: 60584) is no longer running 00:05:24.600 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.600 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:24.600 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:24.600 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:24.600 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:24.600 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:24.600 07:09:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:24.600 07:09:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:24.600 07:09:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:24.600 07:09:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:24.600 00:05:24.600 real 0m1.742s 00:05:24.600 user 0m2.010s 00:05:24.600 sys 0m0.422s 00:05:24.600 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.600 ************************************ 00:05:24.600 07:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.600 END TEST default_locks 00:05:24.600 ************************************ 00:05:24.859 07:09:33 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:24.859 07:09:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:24.859 07:09:33 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.859 07:09:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.859 07:09:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.859 ************************************ 00:05:24.859 START TEST default_locks_via_rpc 00:05:24.859 ************************************ 00:05:24.859 07:09:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:24.859 07:09:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60625 00:05:24.859 07:09:33 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.859 07:09:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60625 00:05:24.859 07:09:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60625 ']' 00:05:24.859 07:09:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.859 07:09:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.859 07:09:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.859 07:09:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.859 07:09:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.859 [2024-07-15 07:09:33.637855] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:05:24.859 [2024-07-15 07:09:33.638001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60625 ] 00:05:24.859 [2024-07-15 07:09:33.776329] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.118 [2024-07-15 07:09:33.839843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.118 [2024-07-15 07:09:33.872840] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:26.054 07:09:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.054 07:09:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:26.054 07:09:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:26.054 07:09:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.054 07:09:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.054 07:09:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.054 07:09:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:26.054 07:09:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:26.054 07:09:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:26.054 07:09:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:26.054 07:09:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:26.054 07:09:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.054 07:09:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.054 07:09:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.054 07:09:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60625 00:05:26.054 07:09:34 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@22 -- # lslocks -p 60625 00:05:26.054 07:09:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.313 07:09:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60625 00:05:26.313 07:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 60625 ']' 00:05:26.313 07:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 60625 00:05:26.313 07:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:26.313 07:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.313 07:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60625 00:05:26.313 07:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:26.313 07:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.313 killing process with pid 60625 00:05:26.313 07:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60625' 00:05:26.313 07:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 60625 00:05:26.313 07:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 60625 00:05:26.573 00:05:26.573 real 0m1.828s 00:05:26.573 user 0m2.126s 00:05:26.573 sys 0m0.480s 00:05:26.573 07:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.573 07:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.573 ************************************ 00:05:26.573 END TEST default_locks_via_rpc 00:05:26.573 ************************************ 00:05:26.573 07:09:35 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:26.573 07:09:35 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:26.573 07:09:35 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.573 07:09:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.573 07:09:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.573 ************************************ 00:05:26.573 START TEST non_locking_app_on_locked_coremask 00:05:26.573 ************************************ 00:05:26.573 07:09:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:26.573 07:09:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60676 00:05:26.573 07:09:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.573 07:09:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60676 /var/tmp/spdk.sock 00:05:26.573 07:09:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60676 ']' 00:05:26.573 07:09:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.573 07:09:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:05:26.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.573 07:09:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.573 07:09:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.573 07:09:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.573 [2024-07-15 07:09:35.522114] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:05:26.573 [2024-07-15 07:09:35.522247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60676 ] 00:05:26.832 [2024-07-15 07:09:35.656311] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.832 [2024-07-15 07:09:35.712258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.832 [2024-07-15 07:09:35.742082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:27.767 07:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.767 07:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:27.767 07:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60692 00:05:27.767 07:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60692 /var/tmp/spdk2.sock 00:05:27.767 07:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:27.767 07:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60692 ']' 00:05:27.767 07:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.767 07:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:27.767 07:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.767 07:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.767 07:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.767 [2024-07-15 07:09:36.560288] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:05:27.767 [2024-07-15 07:09:36.560799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60692 ] 00:05:27.767 [2024-07-15 07:09:36.705425] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
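At this point the non_locking_app_on_locked_coremask case is running two targets on the same -m 0x1 mask: the first claims core 0's lock, the second is started with --disable-cpumask-locks and its own RPC socket, which is why the trace prints "CPU core locks deactivated." A condensed sketch of that setup (the real test gates each step with waitforlisten, which is omitted here):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$spdk_tgt" -m 0x1 &                                              # claims the core-0 lock
    pid1=$!
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                                           # logs "CPU core locks deactivated."

    # after both are up, only the first instance is expected to show the lock
    lslocks -p "$pid1" | grep -q spdk_cpu_lock
    ! lslocks -p "$pid2" | grep -q spdk_cpu_lock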
00:05:27.767 [2024-07-15 07:09:36.705494] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.026 [2024-07-15 07:09:36.827476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.026 [2024-07-15 07:09:36.892389] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:28.962 07:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.962 07:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:28.962 07:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60676 00:05:28.962 07:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60676 00:05:28.962 07:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.528 07:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60676 00:05:29.528 07:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60676 ']' 00:05:29.528 07:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60676 00:05:29.528 07:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:29.528 07:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:29.528 07:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60676 00:05:29.528 07:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:29.528 killing process with pid 60676 00:05:29.528 07:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:29.528 07:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60676' 00:05:29.528 07:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60676 00:05:29.528 07:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60676 00:05:30.095 07:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60692 00:05:30.095 07:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60692 ']' 00:05:30.095 07:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60692 00:05:30.095 07:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:30.095 07:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:30.095 07:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60692 00:05:30.095 07:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:30.095 07:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:30.095 killing process with pid 60692 00:05:30.095 07:09:39 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60692' 00:05:30.095 07:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60692 00:05:30.095 07:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60692 00:05:30.354 00:05:30.354 real 0m3.823s 00:05:30.355 user 0m4.552s 00:05:30.355 sys 0m0.952s 00:05:30.355 07:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.355 07:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.355 ************************************ 00:05:30.355 END TEST non_locking_app_on_locked_coremask 00:05:30.355 ************************************ 00:05:30.614 07:09:39 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:30.614 07:09:39 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:30.614 07:09:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.614 07:09:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.614 07:09:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.614 ************************************ 00:05:30.614 START TEST locking_app_on_unlocked_coremask 00:05:30.614 ************************************ 00:05:30.614 07:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:30.614 07:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60759 00:05:30.614 07:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60759 /var/tmp/spdk.sock 00:05:30.614 07:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60759 ']' 00:05:30.614 07:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:30.614 07:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.614 07:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.614 07:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.614 07:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.614 07:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.614 [2024-07-15 07:09:39.389419] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
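The default_locks_via_rpc case earlier showed that core locks can also be toggled at runtime over the RPC socket rather than only at start-up: framework_disable_cpumask_locks releases them and framework_enable_cpumask_locks re-acquires them. A sketch of that round trip, assuming a lock-enabled target is already listening on /var/tmp/spdk.sock and its PID is in tgt_pid:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock

    "$rpc" -s "$sock" framework_disable_cpumask_locks
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock || echo "locks released"
    "$rpc" -s "$sock" framework_enable_cpumask_locks
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "locks re-acquired"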
00:05:30.614 [2024-07-15 07:09:39.389536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60759 ] 00:05:30.614 [2024-07-15 07:09:39.520567] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:30.614 [2024-07-15 07:09:39.520627] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.873 [2024-07-15 07:09:39.582529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.873 [2024-07-15 07:09:39.612666] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:30.873 07:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.873 07:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:30.873 07:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60762 00:05:30.873 07:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60762 /var/tmp/spdk2.sock 00:05:30.873 07:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:30.873 07:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60762 ']' 00:05:30.873 07:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.873 07:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.873 07:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.873 07:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.873 07:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.873 [2024-07-15 07:09:39.807499] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
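The locking_app_on_unlocked_coremask case starting above inverts the earlier scenario: the first target (pid 60759) gives up core locking at start-up, so the second, lock-enabled target (pid 60762) on the same mask is the one expected to claim core 0. A tiny sketch of the resulting check; the literal PIDs are illustrative and would normally come from waitforlisten:

    first=60759    # started with --disable-cpumask-locks
    second=60762   # started with default locking on the same -m 0x1 mask

    ! lslocks -p "$first"  | grep -q spdk_cpu_lock   # assumed: no lock held here
    lslocks -p "$second" | grep -q spdk_cpu_lock && echo "lock held by second instance"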
00:05:30.873 [2024-07-15 07:09:39.807591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60762 ] 00:05:31.165 [2024-07-15 07:09:39.951529] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.165 [2024-07-15 07:09:40.076511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.424 [2024-07-15 07:09:40.142656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:31.990 07:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.991 07:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:31.991 07:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60762 00:05:31.991 07:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.991 07:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60762 00:05:32.926 07:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60759 00:05:32.926 07:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60759 ']' 00:05:32.926 07:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60759 00:05:32.926 07:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:32.926 07:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:32.926 07:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60759 00:05:32.927 07:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:32.927 07:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:32.927 killing process with pid 60759 00:05:32.927 07:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60759' 00:05:32.927 07:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60759 00:05:32.927 07:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60759 00:05:33.494 07:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60762 00:05:33.494 07:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60762 ']' 00:05:33.494 07:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60762 00:05:33.494 07:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:33.494 07:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:33.494 07:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60762 00:05:33.494 07:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:33.494 07:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:33.494 07:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60762' 00:05:33.494 killing process with pid 60762 00:05:33.494 07:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60762 00:05:33.494 07:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60762 00:05:33.753 00:05:33.753 real 0m3.161s 00:05:33.753 user 0m3.705s 00:05:33.753 sys 0m0.898s 00:05:33.753 07:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.753 07:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.753 ************************************ 00:05:33.753 END TEST locking_app_on_unlocked_coremask 00:05:33.753 ************************************ 00:05:33.753 07:09:42 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:33.753 07:09:42 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:33.753 07:09:42 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.753 07:09:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.753 07:09:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.753 ************************************ 00:05:33.753 START TEST locking_app_on_locked_coremask 00:05:33.753 ************************************ 00:05:33.753 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:33.753 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60829 00:05:33.753 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60829 /var/tmp/spdk.sock 00:05:33.753 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.753 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60829 ']' 00:05:33.753 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.753 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.753 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.753 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.753 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.753 [2024-07-15 07:09:42.601678] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:05:33.753 [2024-07-15 07:09:42.601778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60829 ] 00:05:34.012 [2024-07-15 07:09:42.736457] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.012 [2024-07-15 07:09:42.798916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.012 [2024-07-15 07:09:42.829986] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:34.012 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.012 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:34.012 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60832 00:05:34.012 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:34.012 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60832 /var/tmp/spdk2.sock 00:05:34.012 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:34.012 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60832 /var/tmp/spdk2.sock 00:05:34.012 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:34.012 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.012 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:34.012 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.012 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60832 /var/tmp/spdk2.sock 00:05:34.012 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60832 ']' 00:05:34.012 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.271 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.271 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.271 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.271 07:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.271 [2024-07-15 07:09:43.024632] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:05:34.271 [2024-07-15 07:09:43.024749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60832 ] 00:05:34.271 [2024-07-15 07:09:43.171288] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60829 has claimed it. 00:05:34.271 [2024-07-15 07:09:43.171387] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:34.836 ERROR: process (pid: 60832) is no longer running 00:05:34.836 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60832) - No such process 00:05:34.836 07:09:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.836 07:09:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:34.836 07:09:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:34.836 07:09:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:34.836 07:09:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:34.836 07:09:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:34.836 07:09:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60829 00:05:34.836 07:09:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60829 00:05:34.836 07:09:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.401 07:09:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60829 00:05:35.401 07:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60829 ']' 00:05:35.401 07:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60829 00:05:35.401 07:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:35.401 07:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.401 07:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60829 00:05:35.401 07:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:35.401 07:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:35.401 killing process with pid 60829 00:05:35.401 07:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60829' 00:05:35.401 07:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60829 00:05:35.401 07:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60829 00:05:35.659 00:05:35.659 real 0m1.883s 00:05:35.659 user 0m2.203s 00:05:35.659 sys 0m0.485s 00:05:35.659 07:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.659 ************************************ 00:05:35.659 END 
TEST locking_app_on_locked_coremask 00:05:35.659 ************************************ 00:05:35.659 07:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.659 07:09:44 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:35.659 07:09:44 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:35.659 07:09:44 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.659 07:09:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.659 07:09:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.659 ************************************ 00:05:35.659 START TEST locking_overlapped_coremask 00:05:35.659 ************************************ 00:05:35.659 07:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:35.659 07:09:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60883 00:05:35.659 07:09:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60883 /var/tmp/spdk.sock 00:05:35.659 07:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60883 ']' 00:05:35.659 07:09:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:35.659 07:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.659 07:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.659 07:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.659 07:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.659 07:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.659 [2024-07-15 07:09:44.542008] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:05:35.659 [2024-07-15 07:09:44.542677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60883 ] 00:05:35.926 [2024-07-15 07:09:44.682374] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:35.926 [2024-07-15 07:09:44.746697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.926 [2024-07-15 07:09:44.746836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.926 [2024-07-15 07:09:44.746842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.926 [2024-07-15 07:09:44.777968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:36.185 07:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.185 07:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:36.185 07:09:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60888 00:05:36.185 07:09:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60888 /var/tmp/spdk2.sock 00:05:36.185 07:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:36.185 07:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60888 /var/tmp/spdk2.sock 00:05:36.185 07:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:36.185 07:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.185 07:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:36.185 07:09:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:36.185 07:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.185 07:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60888 /var/tmp/spdk2.sock 00:05:36.185 07:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60888 ']' 00:05:36.185 07:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.185 07:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.185 07:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.185 07:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.185 07:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.185 [2024-07-15 07:09:44.980281] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
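The overlapped-coremask case launches a second target whose mask collides with the first one on core 2 (0x7 covers cores 0-2, 0x1c covers cores 2-4), so the second start is expected to fail. A rough by-hand reproduction using only the commands visible in this trace:

  # First target claims and locks cores 0-2.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 &
  sleep 2
  # Second target wants cores 2-4 on its own RPC socket; core 2 is already locked,
  # so it should exit with "Cannot create lock on core 2" and a non-zero status.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
  echo "second target exited with $?"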
00:05:36.185 [2024-07-15 07:09:44.980371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60888 ] 00:05:36.185 [2024-07-15 07:09:45.126628] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60883 has claimed it. 00:05:36.185 [2024-07-15 07:09:45.126683] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:36.750 ERROR: process (pid: 60888) is no longer running 00:05:36.750 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60888) - No such process 00:05:36.750 07:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.750 07:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:36.750 07:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:36.750 07:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:36.750 07:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:36.750 07:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:36.750 07:09:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:36.750 07:09:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:36.750 07:09:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:36.750 07:09:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:37.008 07:09:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60883 00:05:37.008 07:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 60883 ']' 00:05:37.008 07:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 60883 00:05:37.008 07:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:37.008 07:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:37.008 07:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60883 00:05:37.008 07:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:37.008 07:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:37.008 killing process with pid 60883 00:05:37.008 07:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60883' 00:05:37.008 07:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 60883 00:05:37.008 07:09:45 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 60883 00:05:37.267 00:05:37.267 real 0m1.509s 00:05:37.267 user 0m4.077s 00:05:37.267 sys 0m0.305s 00:05:37.267 07:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.267 07:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.267 ************************************ 00:05:37.267 END TEST locking_overlapped_coremask 00:05:37.267 ************************************ 00:05:37.267 07:09:46 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:37.267 07:09:46 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:37.267 07:09:46 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.267 07:09:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.267 07:09:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.267 ************************************ 00:05:37.267 START TEST locking_overlapped_coremask_via_rpc 00:05:37.267 ************************************ 00:05:37.267 07:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:37.267 07:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60929 00:05:37.267 07:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:37.267 07:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60929 /var/tmp/spdk.sock 00:05:37.267 07:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60929 ']' 00:05:37.267 07:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.267 07:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.267 07:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.267 07:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.267 07:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.267 [2024-07-15 07:09:46.091347] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:05:37.267 [2024-07-15 07:09:46.091476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60929 ] 00:05:37.525 [2024-07-15 07:09:46.228313] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:37.525 [2024-07-15 07:09:46.228362] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:37.525 [2024-07-15 07:09:46.287985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.525 [2024-07-15 07:09:46.288130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.525 [2024-07-15 07:09:46.288131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.525 [2024-07-15 07:09:46.317229] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:37.525 07:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.525 07:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:37.525 07:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60944 00:05:37.525 07:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:37.525 07:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60944 /var/tmp/spdk2.sock 00:05:37.525 07:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60944 ']' 00:05:37.525 07:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.525 07:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.525 07:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.525 07:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.525 07:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.783 [2024-07-15 07:09:46.528047] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:05:37.783 [2024-07-15 07:09:46.528181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60944 ] 00:05:37.783 [2024-07-15 07:09:46.681242] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:37.783 [2024-07-15 07:09:46.681305] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:38.040 [2024-07-15 07:09:46.801489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:38.040 [2024-07-15 07:09:46.801563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.040 [2024-07-15 07:09:46.801562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:38.040 [2024-07-15 07:09:46.857855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.604 [2024-07-15 07:09:47.467241] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60929 has claimed it. 
00:05:38.604 request: 00:05:38.604 { 00:05:38.604 "method": "framework_enable_cpumask_locks", 00:05:38.604 "req_id": 1 00:05:38.604 } 00:05:38.604 Got JSON-RPC error response 00:05:38.604 response: 00:05:38.604 { 00:05:38.604 "code": -32603, 00:05:38.604 "message": "Failed to claim CPU core: 2" 00:05:38.604 } 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60929 /var/tmp/spdk.sock 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60929 ']' 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.604 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.861 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.861 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:38.861 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60944 /var/tmp/spdk2.sock 00:05:38.861 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60944 ']' 00:05:38.861 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.861 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:38.861 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
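In the via_rpc variant both targets start with --disable-cpumask-locks and the locks are claimed afterwards over JSON-RPC; the request/response above is the second target hitting the same core-2 conflict through the framework_enable_cpumask_locks method. A sketch of driving that by hand, assuming the stock scripts/rpc.py client (which is not part of this trace):

  # First target comes up with core locks deactivated, then claims them via RPC.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  sleep 2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
  # A second target started the same way on mask 0x1c and socket /var/tmp/spdk2.sock
  # would get the -32603 "Failed to claim CPU core: 2" error seen above:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks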
00:05:38.861 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.861 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.119 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.119 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:39.119 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:39.119 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:39.119 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:39.119 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:39.119 00:05:39.119 real 0m1.959s 00:05:39.119 user 0m1.150s 00:05:39.119 sys 0m0.153s 00:05:39.119 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.119 07:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.119 ************************************ 00:05:39.119 END TEST locking_overlapped_coremask_via_rpc 00:05:39.119 ************************************ 00:05:39.119 07:09:48 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:39.119 07:09:48 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:39.119 07:09:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60929 ]] 00:05:39.119 07:09:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60929 00:05:39.119 07:09:48 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60929 ']' 00:05:39.119 07:09:48 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60929 00:05:39.119 07:09:48 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:39.119 07:09:48 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.119 07:09:48 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60929 00:05:39.119 07:09:48 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:39.119 07:09:48 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:39.119 killing process with pid 60929 00:05:39.119 07:09:48 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60929' 00:05:39.119 07:09:48 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 60929 00:05:39.119 07:09:48 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 60929 00:05:39.378 07:09:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60944 ]] 00:05:39.378 07:09:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60944 00:05:39.378 07:09:48 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60944 ']' 00:05:39.378 07:09:48 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60944 00:05:39.378 07:09:48 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:39.378 07:09:48 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.378 07:09:48 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60944 00:05:39.636 07:09:48 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:39.636 07:09:48 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:39.636 killing process with pid 60944 00:05:39.636 07:09:48 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60944' 00:05:39.636 07:09:48 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 60944 00:05:39.636 07:09:48 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 60944 00:05:39.636 07:09:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:39.636 07:09:48 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:39.636 07:09:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60929 ]] 00:05:39.636 07:09:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60929 00:05:39.636 07:09:48 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60929 ']' 00:05:39.636 07:09:48 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60929 00:05:39.636 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (60929) - No such process 00:05:39.636 Process with pid 60929 is not found 00:05:39.636 07:09:48 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 60929 is not found' 00:05:39.636 07:09:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60944 ]] 00:05:39.636 07:09:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60944 00:05:39.636 07:09:48 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60944 ']' 00:05:39.636 07:09:48 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60944 00:05:39.636 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (60944) - No such process 00:05:39.636 Process with pid 60944 is not found 00:05:39.636 07:09:48 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 60944 is not found' 00:05:39.636 07:09:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:39.636 00:05:39.636 real 0m16.899s 00:05:39.636 user 0m29.448s 00:05:39.636 sys 0m4.317s 00:05:39.636 07:09:48 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.636 07:09:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.636 ************************************ 00:05:39.636 END TEST cpu_locks 00:05:39.636 ************************************ 00:05:39.895 07:09:48 event -- common/autotest_common.sh@1142 -- # return 0 00:05:39.895 00:05:39.895 real 0m44.065s 00:05:39.895 user 1m26.279s 00:05:39.895 sys 0m7.690s 00:05:39.895 07:09:48 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.895 07:09:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.895 ************************************ 00:05:39.895 END TEST event 00:05:39.895 ************************************ 00:05:39.895 07:09:48 -- common/autotest_common.sh@1142 -- # return 0 00:05:39.895 07:09:48 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:39.895 07:09:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.895 07:09:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.895 07:09:48 -- common/autotest_common.sh@10 -- # set +x 00:05:39.895 ************************************ 00:05:39.895 START TEST thread 
00:05:39.895 ************************************ 00:05:39.895 07:09:48 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:39.895 * Looking for test storage... 00:05:39.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:39.895 07:09:48 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:39.895 07:09:48 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:39.895 07:09:48 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.895 07:09:48 thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.895 ************************************ 00:05:39.895 START TEST thread_poller_perf 00:05:39.895 ************************************ 00:05:39.895 07:09:48 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:39.895 [2024-07-15 07:09:48.771194] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:05:39.895 [2024-07-15 07:09:48.771282] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61061 ] 00:05:40.155 [2024-07-15 07:09:48.906197] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.155 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:40.155 [2024-07-15 07:09:48.967133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.529 ====================================== 00:05:41.529 busy:2210985902 (cyc) 00:05:41.529 total_run_count: 306000 00:05:41.529 tsc_hz: 2200000000 (cyc) 00:05:41.529 ====================================== 00:05:41.529 poller_cost: 7225 (cyc), 3284 (nsec) 00:05:41.529 00:05:41.529 real 0m1.294s 00:05:41.529 user 0m1.144s 00:05:41.529 sys 0m0.044s 00:05:41.529 07:09:50 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.529 ************************************ 00:05:41.529 END TEST thread_poller_perf 00:05:41.529 ************************************ 00:05:41.529 07:09:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:41.529 07:09:50 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:41.530 07:09:50 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:41.530 07:09:50 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:41.530 07:09:50 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.530 07:09:50 thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.530 ************************************ 00:05:41.530 START TEST thread_poller_perf 00:05:41.530 ************************************ 00:05:41.530 07:09:50 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:41.530 [2024-07-15 07:09:50.115181] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
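The poller_perf summary is a straight ratio: poller_cost is the busy cycle count divided by total_run_count, and the nanosecond figure converts that through tsc_hz. For the 1-microsecond-period run above, 2210985902 / 306000 ≈ 7225 cycles and 7225 / 2.2 GHz ≈ 3284 ns; the zero-period run that follows works out the same way. A quick shell check of that arithmetic:

  busy=2210985902 runs=306000 tsc_hz=2200000000
  awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" \
      'BEGIN { c = b / r; printf "poller_cost: %.0f (cyc), %.0f (nsec)\n", c, c / hz * 1e9 }'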
00:05:41.530 [2024-07-15 07:09:50.115277] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61092 ] 00:05:41.530 [2024-07-15 07:09:50.253490] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.530 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:41.530 [2024-07-15 07:09:50.312837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.491 ====================================== 00:05:42.491 busy:2201750836 (cyc) 00:05:42.491 total_run_count: 4074000 00:05:42.491 tsc_hz: 2200000000 (cyc) 00:05:42.491 ====================================== 00:05:42.491 poller_cost: 540 (cyc), 245 (nsec) 00:05:42.491 00:05:42.491 real 0m1.287s 00:05:42.491 user 0m1.141s 00:05:42.491 sys 0m0.039s 00:05:42.491 07:09:51 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.491 ************************************ 00:05:42.491 END TEST thread_poller_perf 00:05:42.491 ************************************ 00:05:42.491 07:09:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:42.491 07:09:51 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:42.491 07:09:51 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:42.491 ************************************ 00:05:42.491 END TEST thread 00:05:42.491 ************************************ 00:05:42.491 00:05:42.491 real 0m2.749s 00:05:42.491 user 0m2.346s 00:05:42.491 sys 0m0.186s 00:05:42.491 07:09:51 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.491 07:09:51 thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.751 07:09:51 -- common/autotest_common.sh@1142 -- # return 0 00:05:42.751 07:09:51 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:42.751 07:09:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.751 07:09:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.751 07:09:51 -- common/autotest_common.sh@10 -- # set +x 00:05:42.751 ************************************ 00:05:42.751 START TEST accel 00:05:42.751 ************************************ 00:05:42.751 07:09:51 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:42.751 * Looking for test storage... 00:05:42.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:42.751 07:09:51 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:42.751 07:09:51 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:42.751 07:09:51 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:42.751 07:09:51 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=61166 00:05:42.751 07:09:51 accel -- accel/accel.sh@63 -- # waitforlisten 61166 00:05:42.751 07:09:51 accel -- common/autotest_common.sh@829 -- # '[' -z 61166 ']' 00:05:42.751 07:09:51 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.751 07:09:51 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:42.751 07:09:51 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:42.751 07:09:51 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:42.751 07:09:51 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.751 07:09:51 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.751 07:09:51 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.751 07:09:51 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.751 07:09:51 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.751 07:09:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.751 07:09:51 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.751 07:09:51 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.751 07:09:51 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:42.751 07:09:51 accel -- accel/accel.sh@41 -- # jq -r . 00:05:42.751 [2024-07-15 07:09:51.610525] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:05:42.751 [2024-07-15 07:09:51.610640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61166 ] 00:05:43.010 [2024-07-15 07:09:51.755618] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.010 [2024-07-15 07:09:51.825478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.010 [2024-07-15 07:09:51.859596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:43.946 07:09:52 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.946 07:09:52 accel -- common/autotest_common.sh@862 -- # return 0 00:05:43.946 07:09:52 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:43.946 07:09:52 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:43.946 07:09:52 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:43.946 07:09:52 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:43.946 07:09:52 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:43.946 07:09:52 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:43.946 07:09:52 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.946 07:09:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.946 07:09:52 accel -- accel/accel.sh@70 -- # jq -r '. 
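The get_expected_opcs helper above asks the target which module backs each accel opcode and builds the expected_opcs map from the answer; with no accel JSON config supplied, everything resolves to the software module, as the long IFS== loop that follows shows. The same query can be made directly with the jq filter from the trace (rpc.py client path assumed, not shown here):

  # Prints one "opcode=module" pair per line, e.g. copy=software, crc32c=software, ...
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'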
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:43.946 07:09:52 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.946 07:09:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # IFS== 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:43.946 07:09:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:43.946 07:09:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # IFS== 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:43.946 07:09:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:43.946 07:09:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # IFS== 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:43.946 07:09:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:43.946 07:09:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # IFS== 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:43.946 07:09:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:43.946 07:09:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # IFS== 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:43.946 07:09:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:43.946 07:09:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # IFS== 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:43.946 07:09:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:43.946 07:09:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # IFS== 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:43.946 07:09:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:43.946 07:09:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # IFS== 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:43.946 07:09:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:43.946 07:09:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # IFS== 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:43.946 07:09:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:43.946 07:09:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # IFS== 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:43.946 07:09:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:43.946 07:09:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # IFS== 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:43.946 07:09:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:43.946 07:09:52 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # IFS== 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:43.946 07:09:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:43.946 07:09:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # IFS== 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:43.946 07:09:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:43.946 07:09:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # IFS== 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:43.946 07:09:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:43.946 07:09:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # IFS== 00:05:43.946 07:09:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:43.946 07:09:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:43.946 07:09:52 accel -- accel/accel.sh@75 -- # killprocess 61166 00:05:43.947 07:09:52 accel -- common/autotest_common.sh@948 -- # '[' -z 61166 ']' 00:05:43.947 07:09:52 accel -- common/autotest_common.sh@952 -- # kill -0 61166 00:05:43.947 07:09:52 accel -- common/autotest_common.sh@953 -- # uname 00:05:43.947 07:09:52 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.947 07:09:52 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61166 00:05:43.947 07:09:52 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:43.947 07:09:52 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.947 07:09:52 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61166' 00:05:43.947 killing process with pid 61166 00:05:43.947 07:09:52 accel -- common/autotest_common.sh@967 -- # kill 61166 00:05:43.947 07:09:52 accel -- common/autotest_common.sh@972 -- # wait 61166 00:05:44.206 07:09:52 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:44.206 07:09:52 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:44.206 07:09:52 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:44.206 07:09:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.206 07:09:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.206 07:09:52 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:44.206 07:09:52 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:44.206 07:09:52 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:44.206 07:09:52 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.206 07:09:52 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.206 07:09:52 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.206 07:09:52 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.206 07:09:52 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.206 07:09:52 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:44.206 07:09:52 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:44.206 07:09:52 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.206 07:09:52 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:44.206 07:09:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:44.206 07:09:53 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:44.206 07:09:53 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:44.206 07:09:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.206 07:09:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.206 ************************************ 00:05:44.206 START TEST accel_missing_filename 00:05:44.206 ************************************ 00:05:44.206 07:09:53 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:44.206 07:09:53 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:44.206 07:09:53 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:44.206 07:09:53 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:44.206 07:09:53 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.206 07:09:53 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:44.206 07:09:53 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.206 07:09:53 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:44.206 07:09:53 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:44.206 07:09:53 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:44.206 07:09:53 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.206 07:09:53 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.206 07:09:53 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.206 07:09:53 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.206 07:09:53 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.206 07:09:53 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:44.206 07:09:53 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:44.206 [2024-07-15 07:09:53.046706] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:05:44.206 [2024-07-15 07:09:53.046841] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61217 ] 00:05:44.465 [2024-07-15 07:09:53.180412] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.465 [2024-07-15 07:09:53.248987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.465 [2024-07-15 07:09:53.282027] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:44.465 [2024-07-15 07:09:53.323826] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:44.465 A filename is required. 
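The missing-filename case fails exactly as intended: a compress workload needs an uncompressed input file via -l, so accel_perf refuses to start and prints "A filename is required." The compress_verify case that follows supplies the file but adds -y, which the compress path rejects as well. For reference, an invocation that should actually run, using the input file this test suite already references (no -y, since compress does not support verify):

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib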
00:05:44.465 07:09:53 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:44.465 07:09:53 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:44.465 07:09:53 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:44.465 07:09:53 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:44.465 07:09:53 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:44.465 07:09:53 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:44.465 00:05:44.465 real 0m0.375s 00:05:44.465 user 0m0.243s 00:05:44.465 sys 0m0.083s 00:05:44.465 ************************************ 00:05:44.465 END TEST accel_missing_filename 00:05:44.465 ************************************ 00:05:44.465 07:09:53 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.465 07:09:53 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:44.723 07:09:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:44.724 07:09:53 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:44.724 07:09:53 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:44.724 07:09:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.724 07:09:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.724 ************************************ 00:05:44.724 START TEST accel_compress_verify 00:05:44.724 ************************************ 00:05:44.724 07:09:53 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:44.724 07:09:53 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:44.724 07:09:53 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:44.724 07:09:53 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:44.724 07:09:53 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.724 07:09:53 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:44.724 07:09:53 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.724 07:09:53 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:44.724 07:09:53 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:44.724 07:09:53 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:44.724 07:09:53 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.724 07:09:53 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.724 07:09:53 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.724 07:09:53 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.724 07:09:53 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.724 07:09:53 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:05:44.724 07:09:53 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:44.724 [2024-07-15 07:09:53.464024] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:05:44.724 [2024-07-15 07:09:53.464134] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61242 ] 00:05:44.724 [2024-07-15 07:09:53.595557] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.724 [2024-07-15 07:09:53.654112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.982 [2024-07-15 07:09:53.684455] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:44.982 [2024-07-15 07:09:53.723855] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:44.982 00:05:44.982 Compression does not support the verify option, aborting. 00:05:44.982 07:09:53 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:44.982 07:09:53 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:44.982 07:09:53 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:44.982 ************************************ 00:05:44.982 END TEST accel_compress_verify 00:05:44.982 ************************************ 00:05:44.982 07:09:53 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:44.982 07:09:53 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:44.982 07:09:53 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:44.982 00:05:44.982 real 0m0.355s 00:05:44.982 user 0m0.228s 00:05:44.982 sys 0m0.071s 00:05:44.982 07:09:53 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.982 07:09:53 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:44.982 07:09:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:44.982 07:09:53 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:44.982 07:09:53 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:44.982 07:09:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.982 07:09:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.982 ************************************ 00:05:44.982 START TEST accel_wrong_workload 00:05:44.982 ************************************ 00:05:44.982 07:09:53 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:44.982 07:09:53 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:44.982 07:09:53 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:44.982 07:09:53 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:44.982 07:09:53 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.982 07:09:53 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:44.982 07:09:53 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.982 07:09:53 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
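The @648-@651 trace above comes from the NOT/valid_exec_arg wrappers in autotest_common.sh, which run the command under test and invert its exit status. A rough, simplified sketch of that pattern (an assumption for illustration, not the actual SPDK helper) is:

  NOT() {
      local es=0
      "$@" || es=$?    # run the wrapped command and capture its exit status
      (( es != 0 ))    # the test passes only if the wrapped command failed
  }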
00:05:44.982 07:09:53 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:44.982 07:09:53 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:44.982 07:09:53 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.982 07:09:53 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.982 07:09:53 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.982 07:09:53 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.982 07:09:53 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.982 07:09:53 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:44.982 07:09:53 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:44.982 Unsupported workload type: foobar 00:05:44.982 [2024-07-15 07:09:53.867492] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:44.982 accel_perf options: 00:05:44.982 [-h help message] 00:05:44.982 [-q queue depth per core] 00:05:44.982 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:44.982 [-T number of threads per core 00:05:44.982 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:44.982 [-t time in seconds] 00:05:44.982 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:44.982 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:44.982 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:44.982 [-l for compress/decompress workloads, name of uncompressed input file 00:05:44.982 [-S for crc32c workload, use this seed value (default 0) 00:05:44.982 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:44.982 [-f for fill workload, use this BYTE value (default 255) 00:05:44.982 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:44.982 [-y verify result if this switch is on] 00:05:44.982 [-a tasks to allocate per core (default: same value as -q)] 00:05:44.982 Can be used to spread operations across a wider range of memory. 
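For comparison (an illustrative sketch, not taken from this run), substituting any workload from the -w list above for 'foobar' gives an invocation the parser accepts; the queue depth and transfer size values here are arbitrary examples of the -q and -o flags shown in the usage text:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -q 64 -o 4096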
00:05:44.982 07:09:53 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:44.982 07:09:53 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:44.982 07:09:53 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:44.982 07:09:53 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:44.982 00:05:44.982 real 0m0.029s 00:05:44.982 user 0m0.013s 00:05:44.982 sys 0m0.016s 00:05:44.982 07:09:53 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.982 ************************************ 00:05:44.982 END TEST accel_wrong_workload 00:05:44.982 ************************************ 00:05:44.982 07:09:53 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:44.982 07:09:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:44.982 07:09:53 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:44.982 07:09:53 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:44.982 07:09:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.982 07:09:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.982 ************************************ 00:05:44.982 START TEST accel_negative_buffers 00:05:44.982 ************************************ 00:05:44.982 07:09:53 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:44.982 07:09:53 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:44.982 07:09:53 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:44.982 07:09:53 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:44.982 07:09:53 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.982 07:09:53 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:44.982 07:09:53 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.982 07:09:53 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:44.982 07:09:53 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:44.982 07:09:53 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:44.982 07:09:53 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.982 07:09:53 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.982 07:09:53 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.982 07:09:53 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.982 07:09:53 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.982 07:09:53 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:44.982 07:09:53 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:45.241 -x option must be non-negative. 
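The "-x option must be non-negative" error above comes from passing -x -1; per the accel_perf usage text, the xor workload needs at least two source buffers, so a passing variant of the wrapped command (illustrative only, not part of the captured run) would be:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2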
00:05:45.241 [2024-07-15 07:09:53.941503] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:45.241 accel_perf options: 00:05:45.241 [-h help message] 00:05:45.241 [-q queue depth per core] 00:05:45.241 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:45.241 [-T number of threads per core 00:05:45.241 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:45.241 [-t time in seconds] 00:05:45.241 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:45.241 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:45.241 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:45.241 [-l for compress/decompress workloads, name of uncompressed input file 00:05:45.241 [-S for crc32c workload, use this seed value (default 0) 00:05:45.241 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:45.241 [-f for fill workload, use this BYTE value (default 255) 00:05:45.241 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:45.241 [-y verify result if this switch is on] 00:05:45.241 [-a tasks to allocate per core (default: same value as -q)] 00:05:45.241 Can be used to spread operations across a wider range of memory. 00:05:45.241 ************************************ 00:05:45.241 END TEST accel_negative_buffers 00:05:45.241 ************************************ 00:05:45.241 07:09:53 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:45.241 07:09:53 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:45.241 07:09:53 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:45.241 07:09:53 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:45.241 00:05:45.241 real 0m0.031s 00:05:45.241 user 0m0.019s 00:05:45.241 sys 0m0.012s 00:05:45.241 07:09:53 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.241 07:09:53 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:45.241 07:09:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:45.241 07:09:53 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:45.241 07:09:53 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:45.241 07:09:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.241 07:09:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.241 ************************************ 00:05:45.241 START TEST accel_crc32c 00:05:45.241 ************************************ 00:05:45.241 07:09:53 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:45.241 07:09:53 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:45.241 07:09:53 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:45.241 07:09:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.241 07:09:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.241 07:09:53 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:45.241 07:09:53 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:05:45.241 07:09:53 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:45.241 07:09:53 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.241 07:09:53 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.241 07:09:53 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.241 07:09:53 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.241 07:09:53 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.241 07:09:53 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:45.241 07:09:53 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:45.241 [2024-07-15 07:09:54.011435] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:05:45.241 [2024-07-15 07:09:54.011518] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61300 ] 00:05:45.241 [2024-07-15 07:09:54.144572] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.498 [2024-07-15 07:09:54.202721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.498 07:09:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.499 07:09:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:46.433 07:09:55 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.433 00:05:46.433 real 0m1.355s 00:05:46.434 user 0m1.200s 00:05:46.434 sys 0m0.064s 00:05:46.434 07:09:55 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.434 ************************************ 00:05:46.434 END TEST accel_crc32c 00:05:46.434 ************************************ 00:05:46.434 07:09:55 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:46.434 07:09:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:46.434 07:09:55 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:46.434 07:09:55 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:46.434 07:09:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.692 07:09:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.692 ************************************ 00:05:46.692 START TEST accel_crc32c_C2 00:05:46.692 ************************************ 00:05:46.692 07:09:55 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:46.692 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:46.692 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:46.692 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.692 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:46.692 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.692 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:46.692 07:09:55 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.692 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.692 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.692 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.692 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.692 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.692 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:46.692 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:46.692 [2024-07-15 07:09:55.417939] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:05:46.692 [2024-07-15 07:09:55.418065] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61335 ] 00:05:46.692 [2024-07-15 07:09:55.557333] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.692 [2024-07-15 07:09:55.615346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.951 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.951 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.951 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.951 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.951 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.951 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.951 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.951 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.951 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:46.951 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.951 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.951 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.951 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.951 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.951 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.951 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.951 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.951 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.951 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.951 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.951 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:46.951 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.951 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:46.951 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.951 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.952 07:09:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.887 07:09:56 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.887 00:05:47.887 real 0m1.370s 00:05:47.887 user 0m1.202s 00:05:47.887 sys 0m0.080s 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.887 ************************************ 00:05:47.887 END TEST accel_crc32c_C2 00:05:47.887 ************************************ 00:05:47.887 07:09:56 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:47.887 07:09:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:47.887 07:09:56 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:47.887 07:09:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:47.887 07:09:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.887 07:09:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.887 ************************************ 00:05:47.887 START TEST accel_copy 00:05:47.887 ************************************ 00:05:47.887 07:09:56 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:47.887 07:09:56 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:47.887 07:09:56 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:47.887 07:09:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.887 07:09:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.887 07:09:56 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:47.887 07:09:56 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:47.887 07:09:56 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:47.887 07:09:56 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.887 07:09:56 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.887 07:09:56 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.887 07:09:56 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.887 07:09:56 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.887 07:09:56 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:47.887 07:09:56 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:47.887 [2024-07-15 07:09:56.835654] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:05:47.887 [2024-07-15 07:09:56.835733] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61368 ] 00:05:48.146 [2024-07-15 07:09:56.968310] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.146 [2024-07-15 07:09:57.026777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.146 
07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.146 07:09:57 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:48.147 07:09:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.147 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.147 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.147 07:09:57 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:48.147 07:09:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.147 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.147 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.147 07:09:57 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.147 07:09:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.147 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.147 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.147 07:09:57 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:48.147 07:09:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.147 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.147 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.147 07:09:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.147 07:09:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.147 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.147 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.147 07:09:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.147 07:09:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.147 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.147 07:09:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:49.523 07:09:58 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.523 00:05:49.523 real 0m1.359s 00:05:49.523 user 0m1.200s 00:05:49.523 sys 0m0.070s 00:05:49.523 07:09:58 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.523 07:09:58 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:49.523 ************************************ 00:05:49.523 END TEST accel_copy 00:05:49.523 ************************************ 00:05:49.523 07:09:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:49.523 07:09:58 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:49.523 07:09:58 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:49.523 07:09:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.523 07:09:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.523 ************************************ 00:05:49.523 START TEST accel_fill 00:05:49.523 ************************************ 00:05:49.523 07:09:58 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:49.523 07:09:58 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:49.523 07:09:58 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:49.523 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.523 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.523 07:09:58 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:49.523 07:09:58 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:49.523 07:09:58 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:49.523 07:09:58 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.523 07:09:58 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.523 07:09:58 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.523 07:09:58 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.523 07:09:58 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.523 07:09:58 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:49.523 07:09:58 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:49.523 [2024-07-15 07:09:58.247022] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:05:49.523 [2024-07-15 07:09:58.247178] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61398 ] 00:05:49.523 [2024-07-15 07:09:58.387029] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.523 [2024-07-15 07:09:58.444812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.782 07:09:58 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.782 07:09:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:50.716 07:09:59 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.716 00:05:50.716 real 0m1.368s 00:05:50.716 user 0m1.199s 00:05:50.716 sys 0m0.080s 00:05:50.716 07:09:59 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.716 07:09:59 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:50.716 ************************************ 00:05:50.716 END TEST accel_fill 00:05:50.716 ************************************ 00:05:50.716 07:09:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:50.716 07:09:59 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:50.716 07:09:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:50.716 07:09:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.716 07:09:59 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.716 ************************************ 00:05:50.716 START TEST accel_copy_crc32c 00:05:50.716 ************************************ 00:05:50.716 07:09:59 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:50.716 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:50.716 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:50.717 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.717 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:50.717 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.717 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:50.717 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:50.717 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.717 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.717 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.717 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.717 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.717 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:05:50.717 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:50.717 [2024-07-15 07:09:59.662770] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:05:50.717 [2024-07-15 07:09:59.662896] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61433 ] 00:05:50.976 [2024-07-15 07:09:59.805618] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.976 [2024-07-15 07:09:59.863420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.976 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.977 07:09:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.390 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.390 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
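For reference, the copy_crc32c pass being set up above runs the accel_perf example with no hardware accel module configured (build_accel_config emits an empty JSON config and the trace settles on accel_module=software), so the copy-plus-CRC-32C path is exercised purely in software. A rough standalone reproduction, using only the binary path and flags visible in the trace, could look like:

  # hypothetical re-run of the traced copy_crc32c case: copy 4096-byte buffers and
  # compute CRC-32C over them for 1 second, verifying results (-y); omitting the
  # harness's piped JSON config (-c /dev/fd/62) is an assumption that keeps the
  # default software module. Typically needs hugepages and root privileges.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y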
00:05:52.390 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.390 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.390 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.390 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.390 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.390 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.390 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.390 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.390 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.390 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.390 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.390 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.391 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.391 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.391 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.391 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.391 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.391 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.391 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.391 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.391 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.391 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.391 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:52.391 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:52.391 07:10:01 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.391 00:05:52.391 real 0m1.370s 00:05:52.391 user 0m1.202s 00:05:52.391 sys 0m0.078s 00:05:52.391 07:10:01 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.391 ************************************ 00:05:52.391 END TEST accel_copy_crc32c 00:05:52.391 ************************************ 00:05:52.391 07:10:01 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:52.391 07:10:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:52.391 07:10:01 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:52.391 07:10:01 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:52.391 07:10:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.391 07:10:01 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.391 ************************************ 00:05:52.391 START TEST accel_copy_crc32c_C2 00:05:52.391 ************************************ 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:52.391 [2024-07-15 07:10:01.077161] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:05:52.391 [2024-07-15 07:10:01.077258] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61467 ] 00:05:52.391 [2024-07-15 07:10:01.214235] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.391 [2024-07-15 07:10:01.272064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.391 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:52.673 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.673 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.673 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.673 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:52.673 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.673 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.673 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.673 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.673 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.673 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.673 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.673 07:10:01 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:52.673 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.673 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.673 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.673 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.673 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.673 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.673 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.673 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.673 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.673 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.673 07:10:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.610 00:05:53.610 real 0m1.366s 00:05:53.610 user 0m1.197s 00:05:53.610 sys 0m0.080s 00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:05:53.610 07:10:02 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:53.610 ************************************ 00:05:53.610 END TEST accel_copy_crc32c_C2 00:05:53.610 ************************************ 00:05:53.610 07:10:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:53.610 07:10:02 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:53.610 07:10:02 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:53.610 07:10:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.610 07:10:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.610 ************************************ 00:05:53.610 START TEST accel_dualcast 00:05:53.610 ************************************ 00:05:53.610 07:10:02 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:53.610 07:10:02 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:53.610 07:10:02 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:53.610 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.610 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.610 07:10:02 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:53.610 07:10:02 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:53.610 07:10:02 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:53.610 07:10:02 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.610 07:10:02 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.610 07:10:02 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.610 07:10:02 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.610 07:10:02 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.610 07:10:02 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:53.610 07:10:02 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:53.610 [2024-07-15 07:10:02.489983] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
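The copy_crc32c_C2 variant that just completed differs from the previous pass only by the extra '-C 2' argument; judging from the 4096-byte and 8192-byte values in its trace, this appears to split the copied data across two segments. A hypothetical equivalent invocation, flags copied from the traced command line and the -c config again omitted by assumption:

  # same binary as the other accel cases; -C 2 taken verbatim from the trace
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2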
00:05:53.610 [2024-07-15 07:10:02.490100] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61502 ] 00:05:53.869 [2024-07-15 07:10:02.629562] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.869 [2024-07-15 07:10:02.687976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.869 07:10:02 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.869 07:10:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:55.245 07:10:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.245 00:05:55.245 real 0m1.365s 00:05:55.245 user 0m1.202s 00:05:55.245 sys 0m0.067s 00:05:55.245 07:10:03 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.245 07:10:03 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:55.245 ************************************ 00:05:55.245 END TEST accel_dualcast 00:05:55.245 ************************************ 00:05:55.245 07:10:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:55.245 07:10:03 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:55.245 07:10:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:55.245 07:10:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.245 07:10:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.245 ************************************ 00:05:55.245 START TEST accel_compare 00:05:55.245 ************************************ 00:05:55.245 07:10:03 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:55.245 07:10:03 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:55.245 07:10:03 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:55.245 07:10:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.245 07:10:03 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:55.245 07:10:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.245 07:10:03 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:55.245 07:10:03 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:55.245 07:10:03 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.245 07:10:03 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.245 07:10:03 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.245 07:10:03 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.245 07:10:03 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.245 07:10:03 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:55.245 07:10:03 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:55.245 [2024-07-15 07:10:03.898507] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
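The dualcast pass that finished above (real 0m1.365s) exercises the opcode that copies a single source buffer into two destination buffers, again through the software module. A minimal reproduction under the same assumptions as the earlier sketches:

  # dualcast: one 4096-byte source copied to two destinations, 1-second verified run
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y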
00:05:55.245 [2024-07-15 07:10:03.898638] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61531 ] 00:05:55.245 [2024-07-15 07:10:04.037837] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.245 [2024-07-15 07:10:04.125059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.245 07:10:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.245 07:10:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.245 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.245 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.245 07:10:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.245 07:10:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.245 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.245 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.245 07:10:04 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:55.245 07:10:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.245 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.245 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.245 07:10:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.245 07:10:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.246 07:10:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:56.622 07:10:05 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.622 00:05:56.622 real 0m1.403s 00:05:56.622 user 0m1.227s 00:05:56.622 sys 0m0.082s 00:05:56.622 07:10:05 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.622 ************************************ 00:05:56.622 END TEST accel_compare 00:05:56.622 ************************************ 00:05:56.622 07:10:05 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:56.622 07:10:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:56.622 07:10:05 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:56.622 07:10:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:56.622 07:10:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.622 07:10:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.622 ************************************ 00:05:56.622 START TEST accel_xor 00:05:56.622 ************************************ 00:05:56.622 07:10:05 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:56.622 [2024-07-15 07:10:05.344624] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
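The compare pass above runs the buffer-comparison opcode over the same 4096-byte buffers; only the workload name changes relative to the previous sketches, with the piped -c config again dropped by assumption:

  # compare: check that two 4096-byte buffers are identical, 1-second verified run
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y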
00:05:56.622 [2024-07-15 07:10:05.344745] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61565 ] 00:05:56.622 [2024-07-15 07:10:05.481050] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.622 [2024-07-15 07:10:05.539439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.622 07:10:05 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.882 07:10:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.817 07:10:06 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.817 ************************************ 00:05:57.817 END TEST accel_xor 00:05:57.817 ************************************ 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:57.817 07:10:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.817 00:05:57.817 real 0m1.363s 00:05:57.817 user 0m1.200s 00:05:57.817 sys 0m0.073s 00:05:57.817 07:10:06 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.817 07:10:06 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:57.818 07:10:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:57.818 07:10:06 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:57.818 07:10:06 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:57.818 07:10:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.818 07:10:06 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.818 ************************************ 00:05:57.818 START TEST accel_xor 00:05:57.818 ************************************ 00:05:57.818 07:10:06 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:57.818 07:10:06 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:57.818 07:10:06 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:57.818 07:10:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.818 07:10:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.818 07:10:06 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:57.818 07:10:06 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:57.818 07:10:06 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:57.818 07:10:06 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.818 07:10:06 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.818 07:10:06 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.818 07:10:06 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.818 07:10:06 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.818 07:10:06 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:57.818 07:10:06 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:57.818 [2024-07-15 07:10:06.758779] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
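The first xor pass above keeps the default number of source buffers (the val=2 entry in its trace), XORing them into a single destination. Reproduced from the traced flags under the same assumptions as before:

  # xor with the default two source buffers, 4096 bytes each, 1-second verified run
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y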
00:05:57.818 [2024-07-15 07:10:06.758909] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61600 ] 00:05:58.076 [2024-07-15 07:10:06.904051] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.076 [2024-07-15 07:10:06.972737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.076 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.077 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.077 07:10:07 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.077 07:10:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.077 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.077 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.077 07:10:07 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:58.077 07:10:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.077 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.077 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.077 07:10:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.077 07:10:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.077 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.077 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.077 07:10:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.077 07:10:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.077 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.077 07:10:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.452 07:10:08 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:59.452 07:10:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.452 00:05:59.452 real 0m1.393s 00:05:59.452 user 0m1.216s 00:05:59.452 sys 0m0.082s 00:05:59.452 07:10:08 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.452 07:10:08 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:59.452 ************************************ 00:05:59.452 END TEST accel_xor 00:05:59.452 ************************************ 00:05:59.452 07:10:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:59.452 07:10:08 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:59.452 07:10:08 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:59.452 07:10:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.452 07:10:08 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.452 ************************************ 00:05:59.452 START TEST accel_dif_verify 00:05:59.452 ************************************ 00:05:59.452 07:10:08 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:59.452 07:10:08 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:59.452 07:10:08 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:59.452 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.452 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.452 07:10:08 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:59.452 07:10:08 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:59.452 07:10:08 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:59.452 07:10:08 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.452 07:10:08 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.452 07:10:08 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.452 07:10:08 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.452 07:10:08 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.452 07:10:08 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:59.452 07:10:08 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:59.452 [2024-07-15 07:10:08.201786] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
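The run_test line above launches accel_test with '-t 1 -w dif_verify', and accel.sh forwards that to the accel_perf example binary together with a JSON accel config fed in on '-c /dev/fd/62' (built by build_accel_config). A rough by-hand equivalent under the same repo layout as this log is sketched below; only the binary path and the -t/-w flags are taken from the trace, and dropping the generated config is an assumption made for simplicity.

    # Sketch: re-run the ~1 second software dif_verify workload by hand.
    # The harness normally also injects a JSON accel config via '-c /dev/fd/62';
    # it is omitted in this sketch.
    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/accel_perf -t 1 -w dif_verify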
00:05:59.452 [2024-07-15 07:10:08.201875] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61629 ] 00:05:59.452 [2024-07-15 07:10:08.335790] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.452 [2024-07-15 07:10:08.394189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.711 07:10:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:59.711 07:10:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.711 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.711 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.711 07:10:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:59.711 07:10:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.711 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.712 07:10:08 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.712 07:10:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:00.649 07:10:09 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.649 ************************************ 00:06:00.649 END TEST accel_dif_verify 00:06:00.649 ************************************ 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:00.649 07:10:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.649 00:06:00.649 real 0m1.367s 00:06:00.649 user 0m1.208s 00:06:00.649 sys 0m0.064s 00:06:00.649 07:10:09 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.649 07:10:09 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:00.649 07:10:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:00.649 07:10:09 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:00.649 07:10:09 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:00.649 07:10:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.649 07:10:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.649 ************************************ 00:06:00.649 START TEST accel_dif_generate 00:06:00.649 ************************************ 00:06:00.649 07:10:09 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:00.649 07:10:09 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:00.649 07:10:09 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:00.649 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:00.649 07:10:09 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:00.649 07:10:09 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:00.649 07:10:09 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:00.649 07:10:09 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:00.649 07:10:09 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.649 07:10:09 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.649 07:10:09 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.649 07:10:09 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.649 07:10:09 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.649 07:10:09 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:00.649 07:10:09 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:00.920 [2024-07-15 07:10:09.620739] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:00.920 [2024-07-15 07:10:09.620841] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61669 ] 00:06:00.921 [2024-07-15 07:10:09.761689] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.921 [2024-07-15 07:10:09.825818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.921 07:10:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:00.921 07:10:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:00.921 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:00.921 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:00.921 07:10:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:00.921 07:10:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:00.921 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:00.921 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:00.921 07:10:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:00.921 07:10:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:00.921 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:00.921 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:00.921 07:10:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:00.921 07:10:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:00.921 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:00.921 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:00.921 07:10:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:00.921 07:10:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:00.921 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:00.921 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:00.921 07:10:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.191 07:10:09 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:01.191 07:10:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.192 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.192 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.192 07:10:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:01.192 07:10:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.192 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.192 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.192 07:10:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.192 07:10:09 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:01.192 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.192 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.192 07:10:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:01.192 07:10:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.192 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.192 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.192 07:10:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:01.192 07:10:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.192 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.192 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.192 07:10:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:01.192 07:10:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.192 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.192 07:10:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:02.127 ************************************ 00:06:02.127 END TEST accel_dif_generate 00:06:02.127 ************************************ 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.127 07:10:10 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:02.127 
07:10:10 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.127 00:06:02.127 real 0m1.378s 00:06:02.127 user 0m1.203s 00:06:02.127 sys 0m0.081s 00:06:02.127 07:10:10 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.127 07:10:10 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:02.127 07:10:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.127 07:10:11 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:02.127 07:10:11 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:02.127 07:10:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.127 07:10:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.127 ************************************ 00:06:02.127 START TEST accel_dif_generate_copy 00:06:02.127 ************************************ 00:06:02.127 07:10:11 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:02.127 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:02.127 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:02.127 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.127 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:02.127 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.127 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:02.127 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:02.127 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.127 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.127 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.127 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.127 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.127 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:02.127 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:02.127 [2024-07-15 07:10:11.040721] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
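Each '************ START TEST ... ************' / 'END TEST' banner pair in this log comes from run_test, which runs the named case (here accel_test -t 1 -w dif_generate_copy) and, judging by the real/user/sys triples in the summaries (bash time-style output), times it. The fragment below is only an analogue of that wrapper for local experiments, not run_test's actual implementation.

    # Illustrative analogue of the banner-plus-timing wrapper (not the real run_test).
    echo '************ START TEST accel_dif_generate_copy ************'
    time /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy
    echo '************ END TEST accel_dif_generate_copy ************'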
00:06:02.127 [2024-07-15 07:10:11.040811] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61698 ] 00:06:02.387 [2024-07-15 07:10:11.172990] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.387 [2024-07-15 07:10:11.235168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.387 07:10:11 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.387 07:10:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
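The long runs of 'IFS=:', 'read -r var val' and 'case "$var" in' entries are the xtrace of a small loop in accel.sh (the PS4 prefixes point at lines 19-23) that walks var/val pairs describing the run and records values such as accel_module=software and accel_opc=dif_generate_copy, along with expected sizes ('4096 bytes'), queue depth (32) and run time ('1 seconds'). The snippet below is only a plausible shape for that kind of loop; the variable and input names are placeholders, not code copied from accel.sh.

    # Placeholder sketch of a colon-separated var/val parse loop (illustrative only).
    while IFS=: read -r var val; do
        case "$var" in
            *) : "would record expected setting '$var' = '$val'" ;;   # no-op placeholder
        esac
    done <<< "$expected_settings"   # hypothetical variable holding the pairs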
00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.762 00:06:03.762 real 0m1.365s 00:06:03.762 user 0m1.194s 00:06:03.762 sys 0m0.072s 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.762 ************************************ 00:06:03.762 END TEST accel_dif_generate_copy 00:06:03.762 ************************************ 00:06:03.762 07:10:12 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:03.762 07:10:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:03.762 07:10:12 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:03.762 07:10:12 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:03.762 07:10:12 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:03.762 07:10:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.762 07:10:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.762 ************************************ 00:06:03.762 START TEST accel_comp 00:06:03.762 ************************************ 00:06:03.762 07:10:12 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:03.762 07:10:12 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:03.762 [2024-07-15 07:10:12.458548] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:03.762 [2024-07-15 07:10:12.458646] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61727 ] 00:06:03.762 [2024-07-15 07:10:12.597026] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.762 [2024-07-15 07:10:12.656143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.762 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.763 07:10:12 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.763 07:10:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:05.137 ************************************ 00:06:05.137 END TEST accel_comp 00:06:05.137 ************************************ 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:05.137 07:10:13 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.137 00:06:05.137 real 0m1.377s 00:06:05.137 user 0m1.206s 00:06:05.137 sys 0m0.077s 00:06:05.137 07:10:13 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.137 07:10:13 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:05.137 07:10:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:05.137 07:10:13 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:05.137 07:10:13 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:05.137 07:10:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.137 07:10:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.137 ************************************ 00:06:05.137 START TEST accel_decomp 00:06:05.137 ************************************ 00:06:05.137 07:10:13 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:05.137 07:10:13 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:05.137 07:10:13 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:05.137 07:10:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:05.137 07:10:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:05.137 07:10:13 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:05.137 07:10:13 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:05.137 07:10:13 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:05.137 07:10:13 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.137 07:10:13 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.137 07:10:13 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.137 07:10:13 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.137 07:10:13 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.137 07:10:13 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:05.137 07:10:13 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:05.137 [2024-07-15 07:10:13.883189] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:05.137 [2024-07-15 07:10:13.883330] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61767 ] 00:06:05.137 [2024-07-15 07:10:14.015827] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.137 [2024-07-15 07:10:14.075540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.396 07:10:14 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:05.396 07:10:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.328 07:10:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:06.328 07:10:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.328 07:10:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.328 07:10:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.328 07:10:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:06.328 07:10:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.328 07:10:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.328 07:10:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.328 07:10:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:06.328 07:10:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.328 07:10:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.328 07:10:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.328 07:10:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:06.328 07:10:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.328 07:10:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.328 07:10:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.328 07:10:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:06.328 07:10:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.328 07:10:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.329 07:10:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.329 07:10:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:06.329 07:10:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.329 07:10:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.329 07:10:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.329 07:10:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.329 07:10:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:06.329 07:10:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.329 00:06:06.329 real 0m1.374s 00:06:06.329 user 0m1.211s 00:06:06.329 sys 0m0.076s 00:06:06.329 07:10:15 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.329 07:10:15 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:06.329 ************************************ 00:06:06.329 END TEST accel_decomp 00:06:06.329 ************************************ 00:06:06.329 07:10:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.329 07:10:15 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:06.329 07:10:15 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:06.329 07:10:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.329 07:10:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.329 ************************************ 00:06:06.329 START TEST accel_decomp_full 00:06:06.329 ************************************ 00:06:06.329 07:10:15 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:06.329 07:10:15 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:06.329 07:10:15 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:06.329 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:06.329 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:06.329 07:10:15 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:06.329 07:10:15 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:06.329 07:10:15 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:06.329 07:10:15 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.329 07:10:15 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.329 07:10:15 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.329 07:10:15 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.329 07:10:15 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.329 07:10:15 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:06.329 07:10:15 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:06.587 [2024-07-15 07:10:15.296506] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
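Annotator note: accel_decomp_full is the same software decompress run with one extra flag, -o 0; the '111250 bytes' value in the trace below (versus '4096 bytes' for accel_decomp above) suggests accel_perf then works on the whole bib test file per operation instead of 4096-byte chunks. The command below is the accel_perf line from the trace, re-wrapped and annotated; the per-flag comments are an interpretation of this log, not accel_perf documentation:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -c /dev/fd/62 -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
# -c /dev/fd/62   accel JSON config handed over by accel.sh's build_accel_config
# -t 1            run the workload for 1 second ('1 seconds' in the trace)
# -w decompress   opcode under test (accel_opc=decompress in the trace)
# -l              compressed input file (test/accel/bib); -y asks for result verification
# -o 0            full-buffer mode here: '111250 bytes' per operation instead of '4096 bytes'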
00:06:06.587 [2024-07-15 07:10:15.296634] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61796 ] 00:06:06.587 [2024-07-15 07:10:15.438393] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.587 [2024-07-15 07:10:15.497854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:06.587 07:10:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:06.845 07:10:15 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:06.845 07:10:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:07.780 07:10:16 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:07.780 07:10:16 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.780 00:06:07.780 real 0m1.392s 00:06:07.780 user 0m1.232s 00:06:07.780 sys 0m0.071s 00:06:07.780 07:10:16 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.780 07:10:16 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:07.780 ************************************ 00:06:07.780 END TEST accel_decomp_full 00:06:07.780 ************************************ 00:06:07.780 07:10:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.780 07:10:16 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:07.780 07:10:16 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:07.780 07:10:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.780 07:10:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.780 ************************************ 00:06:07.780 START TEST accel_decomp_mcore 00:06:07.780 ************************************ 00:06:07.780 07:10:16 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:07.780 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:07.780 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:07.780 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.780 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.780 07:10:16 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:07.780 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:07.780 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:07.780 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.780 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.780 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.780 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.780 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.780 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:07.780 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:08.039 [2024-07-15 07:10:16.738618] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:08.039 [2024-07-15 07:10:16.738724] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61831 ] 00:06:08.039 [2024-07-15 07:10:16.879342] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:08.039 [2024-07-15 07:10:16.952651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.039 [2024-07-15 07:10:16.952789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.039 [2024-07-15 07:10:16.952912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.039 [2024-07-15 07:10:16.952916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.297 07:10:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.297 07:10:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.232 07:10:18 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.232 00:06:09.232 real 0m1.405s 00:06:09.232 user 0m4.434s 00:06:09.232 sys 0m0.093s 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.232 ************************************ 00:06:09.232 07:10:18 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:09.232 END TEST accel_decomp_mcore 00:06:09.232 ************************************ 00:06:09.232 07:10:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:09.232 07:10:18 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:09.232 07:10:18 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:09.232 07:10:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.232 07:10:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.232 ************************************ 00:06:09.232 START TEST accel_decomp_full_mcore 00:06:09.232 ************************************ 00:06:09.232 07:10:18 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:09.232 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:09.232 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:09.232 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.232 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:09.232 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.232 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:09.232 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:09.232 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.232 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.232 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.232 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.232 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.232 07:10:18 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:09.232 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:09.495 [2024-07-15 07:10:18.196230] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:09.495 [2024-07-15 07:10:18.196359] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61868 ] 00:06:09.495 [2024-07-15 07:10:18.342715] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:09.495 [2024-07-15 07:10:18.404897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.495 [2024-07-15 07:10:18.405020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.495 [2024-07-15 07:10:18.405155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.495 [2024-07-15 07:10:18.405155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.495 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:09.495 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.495 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.495 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.495 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:09.495 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.495 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.495 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.495 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:09.495 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.495 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.495 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.495 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:09.495 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.495 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.495 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.495 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:09.495 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.495 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.495 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.495 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:09.495 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.495 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.495 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:09.763 07:10:18 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.763 07:10:18 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.763 07:10:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.713 07:10:19 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.713 00:06:10.713 real 0m1.405s 00:06:10.713 user 0m4.457s 00:06:10.713 sys 0m0.099s 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.713 ************************************ 00:06:10.713 END TEST accel_decomp_full_mcore 00:06:10.713 ************************************ 00:06:10.713 07:10:19 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:10.713 07:10:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.713 07:10:19 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:10.713 07:10:19 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:10.713 07:10:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.713 07:10:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.713 ************************************ 00:06:10.713 START TEST accel_decomp_mthread 00:06:10.713 ************************************ 00:06:10.714 07:10:19 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:10.714 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:10.714 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:10.714 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.714 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.714 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:10.714 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:10.714 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:10.714 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.714 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.714 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.714 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.714 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.714 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:10.714 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:10.714 [2024-07-15 07:10:19.649762] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
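Annotator note: the two mcore variants above finish with about the same wall-clock time as the single-core runs (real 0m1.405s each) but roughly 4.4s of user time, which lines up with the four reactors started on cores 0 to 3 under -m 0xf / -c 0xf. accel_decomp_mthread now drops back to one core and varies the thread count instead: the run_test line above passes -T 2, and the val=2 entry further down appears to be that thread count. Invocation as seen in the trace:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
    -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2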
00:06:10.714 [2024-07-15 07:10:19.649895] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61900 ] 00:06:10.973 [2024-07-15 07:10:19.788841] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.973 [2024-07-15 07:10:19.845434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.973 07:10:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.353 07:10:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:12.353 07:10:20 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:12.353 07:10:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.353 07:10:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.353 07:10:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:12.353 07:10:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.353 07:10:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.353 07:10:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.353 07:10:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:12.353 07:10:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.353 07:10:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.353 07:10:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.353 07:10:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:12.353 07:10:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.353 07:10:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.353 07:10:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.353 07:10:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:12.353 07:10:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.353 07:10:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.353 07:10:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.353 07:10:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:12.353 07:10:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.353 07:10:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.353 07:10:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.353 07:10:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:12.353 07:10:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.353 07:10:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.353 07:10:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.353 07:10:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.353 07:10:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:12.353 07:10:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.353 00:06:12.353 real 0m1.382s 00:06:12.353 user 0m1.213s 00:06:12.353 sys 0m0.077s 00:06:12.353 07:10:21 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.353 07:10:21 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:12.353 ************************************ 00:06:12.353 END TEST accel_decomp_mthread 00:06:12.353 ************************************ 00:06:12.353 07:10:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:12.354 07:10:21 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:12.354 07:10:21 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:12.354 07:10:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.354 07:10:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.354 ************************************ 00:06:12.354 START 
TEST accel_decomp_full_mthread 00:06:12.354 ************************************ 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:12.354 [2024-07-15 07:10:21.073468] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
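Annotator note: accel_decomp_full_mthread combines the two knobs exercised separately above, still on a single core. From the accel_perf line in the trace:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
    -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
# -o 0   full 111250-byte buffer per operation, as in accel_decomp_full
# -T 2   two worker threads, as in accel_decomp_mthread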
00:06:12.354 [2024-07-15 07:10:21.073581] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61940 ] 00:06:12.354 [2024-07-15 07:10:21.206065] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.354 [2024-07-15 07:10:21.266864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.354 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:12.613 07:10:21 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.613 07:10:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.550 00:06:13.550 real 0m1.401s 00:06:13.550 user 0m1.236s 00:06:13.550 sys 0m0.071s 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.550 ************************************ 00:06:13.550 END TEST accel_decomp_full_mthread 00:06:13.550 ************************************ 00:06:13.550 07:10:22 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
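Note: accel_decomp_full_mthread above is driven by accel_perf with the command line captured in the log (a one-second -t 1, verified -y, multi-threaded -T 2 decompress of test/accel/bib). A standalone sketch of that invocation, with the flags copied verbatim from the log; the harness additionally feeds an accel JSON config over /dev/fd/62, which is left out of this sketch since no config entries are set in this run.

  # Re-run the accel_perf workload exercised by the test above (command copied from the log).
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/examples/accel_perf" \
      -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -o 0 -T 2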
00:06:13.550 07:10:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:13.550 07:10:22 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:13.550 07:10:22 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:13.550 07:10:22 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:13.550 07:10:22 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.550 07:10:22 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:13.550 07:10:22 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.550 07:10:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.550 07:10:22 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.550 07:10:22 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.550 07:10:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.550 07:10:22 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.550 07:10:22 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:13.550 07:10:22 accel -- accel/accel.sh@41 -- # jq -r . 00:06:13.808 ************************************ 00:06:13.808 START TEST accel_dif_functional_tests 00:06:13.808 ************************************ 00:06:13.808 07:10:22 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:13.808 [2024-07-15 07:10:22.558107] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:13.808 [2024-07-15 07:10:22.558213] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61970 ] 00:06:13.808 [2024-07-15 07:10:22.698957] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.066 [2024-07-15 07:10:22.762146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.066 [2024-07-15 07:10:22.762307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.066 [2024-07-15 07:10:22.762312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.066 [2024-07-15 07:10:22.792127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:14.066 00:06:14.066 00:06:14.066 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.066 http://cunit.sourceforge.net/ 00:06:14.066 00:06:14.066 00:06:14.066 Suite: accel_dif 00:06:14.066 Test: verify: DIF generated, GUARD check ...passed 00:06:14.066 Test: verify: DIF generated, APPTAG check ...passed 00:06:14.066 Test: verify: DIF generated, REFTAG check ...passed 00:06:14.066 Test: verify: DIF not generated, GUARD check ...passed 00:06:14.066 Test: verify: DIF not generated, APPTAG check ...passed 00:06:14.066 Test: verify: DIF not generated, REFTAG check ...passed 00:06:14.066 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:14.066 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:06:14.066 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:14.066 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:14.066 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:14.066 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:06:14.066 Test: verify copy: DIF generated, GUARD check ...passed 00:06:14.066 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:14.066 Test: verify copy: DIF 
generated, REFTAG check ...passed 00:06:14.066 Test: verify copy: DIF not generated, GUARD check ...passed 00:06:14.066 Test: verify copy: DIF not generated, APPTAG check ...passed 00:06:14.066 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 07:10:22.812030] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:14.066 [2024-07-15 07:10:22.812109] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:14.066 [2024-07-15 07:10:22.812142] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:14.066 [2024-07-15 07:10:22.812205] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:14.066 [2024-07-15 07:10:22.812344] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:14.066 [2024-07-15 07:10:22.812501] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:14.066 [2024-07-15 07:10:22.812534] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:14.066 [2024-07-15 07:10:22.812567] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:14.066 passed 00:06:14.066 Test: generate copy: DIF generated, GUARD check ...passed 00:06:14.066 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:14.066 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:14.066 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:14.066 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:14.066 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:14.066 Test: generate copy: iovecs-len validate ...passed 00:06:14.066 Test: generate copy: buffer alignment validate ...passed 00:06:14.066 00:06:14.066 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.066 suites 1 1 n/a 0 0 00:06:14.066 tests 26 26 26 0 0 00:06:14.066 asserts 115 115 115 0 n/a 00:06:14.066 00:06:14.066 Elapsed time = 0.002 seconds 00:06:14.066 [2024-07-15 07:10:22.812809] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:14.066 00:06:14.066 real 0m0.460s 00:06:14.066 user 0m0.524s 00:06:14.066 sys 0m0.109s 00:06:14.066 ************************************ 00:06:14.066 END TEST accel_dif_functional_tests 00:06:14.066 07:10:22 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.066 07:10:22 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:14.066 ************************************ 00:06:14.066 07:10:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.066 00:06:14.066 real 0m31.545s 00:06:14.066 user 0m33.892s 00:06:14.066 sys 0m2.844s 00:06:14.066 07:10:23 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.066 ************************************ 00:06:14.066 END TEST accel 00:06:14.066 ************************************ 00:06:14.066 07:10:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.325 07:10:23 -- common/autotest_common.sh@1142 -- # return 0 00:06:14.325 07:10:23 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:14.325 07:10:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.325 07:10:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.325 07:10:23 -- common/autotest_common.sh@10 -- # set +x 00:06:14.325 ************************************ 00:06:14.325 START TEST accel_rpc 00:06:14.325 ************************************ 00:06:14.325 07:10:23 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:14.325 * Looking for test storage... 00:06:14.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:14.325 07:10:23 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:14.325 07:10:23 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=62042 00:06:14.325 07:10:23 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 62042 00:06:14.325 07:10:23 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:14.325 07:10:23 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 62042 ']' 00:06:14.325 07:10:23 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.325 07:10:23 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.325 07:10:23 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.325 07:10:23 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.325 07:10:23 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.325 [2024-07-15 07:10:23.203425] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:14.325 [2024-07-15 07:10:23.203508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62042 ] 00:06:14.583 [2024-07-15 07:10:23.342434] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.583 [2024-07-15 07:10:23.404048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.518 07:10:24 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.518 07:10:24 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:15.518 07:10:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:15.518 07:10:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:15.518 07:10:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:15.518 07:10:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:15.518 07:10:24 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:15.518 07:10:24 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.518 07:10:24 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.518 07:10:24 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.518 ************************************ 00:06:15.518 START TEST accel_assign_opcode 00:06:15.518 ************************************ 00:06:15.518 07:10:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:15.518 07:10:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:15.518 07:10:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.518 07:10:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:15.518 [2024-07-15 07:10:24.212665] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:15.518 07:10:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.518 07:10:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:15.518 07:10:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.518 07:10:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:15.518 [2024-07-15 07:10:24.220654] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:15.518 07:10:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.518 07:10:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:15.518 07:10:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.518 07:10:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:15.518 [2024-07-15 07:10:24.259088] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:15.518 07:10:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.518 07:10:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:15.518 07:10:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:15.518 07:10:24 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.518 07:10:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:15.518 07:10:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:15.518 07:10:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.518 software 00:06:15.518 00:06:15.518 real 0m0.203s 00:06:15.518 user 0m0.056s 00:06:15.518 sys 0m0.012s 00:06:15.518 07:10:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.518 07:10:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:15.518 ************************************ 00:06:15.518 END TEST accel_assign_opcode 00:06:15.518 ************************************ 00:06:15.518 07:10:24 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:15.518 07:10:24 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 62042 00:06:15.518 07:10:24 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 62042 ']' 00:06:15.518 07:10:24 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 62042 00:06:15.518 07:10:24 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:15.518 07:10:24 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.518 07:10:24 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62042 00:06:15.776 07:10:24 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.776 07:10:24 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.776 killing process with pid 62042 00:06:15.776 07:10:24 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62042' 00:06:15.776 07:10:24 accel_rpc -- common/autotest_common.sh@967 -- # kill 62042 00:06:15.776 07:10:24 accel_rpc -- common/autotest_common.sh@972 -- # wait 62042 00:06:15.776 00:06:15.776 real 0m1.668s 00:06:15.776 user 0m1.890s 00:06:15.776 sys 0m0.346s 00:06:15.776 07:10:24 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.776 07:10:24 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.776 ************************************ 00:06:15.776 END TEST accel_rpc 00:06:15.776 ************************************ 00:06:16.034 07:10:24 -- common/autotest_common.sh@1142 -- # return 0 00:06:16.035 07:10:24 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:16.035 07:10:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.035 07:10:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.035 07:10:24 -- common/autotest_common.sh@10 -- # set +x 00:06:16.035 ************************************ 00:06:16.035 START TEST app_cmdline 00:06:16.035 ************************************ 00:06:16.035 07:10:24 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:16.035 * Looking for test storage... 
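Note: the accel_assign_opcode test above drives a freshly started spdk_tgt entirely over JSON-RPC: the target is launched with --wait-for-rpc, the copy opcode is pinned to the software module, initialization is completed with framework_start_init, and the assignment is read back. A minimal standalone sketch of that same sequence using scripts/rpc.py (socket path and method names as shown in the log; the readiness loop is a simplification of the harness's waitforlisten):

  # Sketch of the RPC sequence exercised by accel_assign_opcode (method names taken from the log).
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/bin/spdk_tgt" --wait-for-rpc &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done              # stand-in for waitforlisten
  "$SPDK_DIR/scripts/rpc.py" accel_assign_opc -o copy -m software  # pin the copy opcode to the software module
  "$SPDK_DIR/scripts/rpc.py" framework_start_init                  # finish subsystem init
  "$SPDK_DIR/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy  # prints "software"
  kill $!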
00:06:16.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:16.035 07:10:24 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:16.035 07:10:24 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62130 00:06:16.035 07:10:24 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:16.035 07:10:24 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62130 00:06:16.035 07:10:24 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 62130 ']' 00:06:16.035 07:10:24 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.035 07:10:24 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.035 07:10:24 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.035 07:10:24 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.035 07:10:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:16.035 [2024-07-15 07:10:24.910705] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:16.035 [2024-07-15 07:10:24.910802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62130 ] 00:06:16.292 [2024-07-15 07:10:25.050238] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.292 [2024-07-15 07:10:25.108905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.292 [2024-07-15 07:10:25.137825] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:16.549 07:10:25 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.549 07:10:25 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:16.549 07:10:25 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:16.805 { 00:06:16.805 "version": "SPDK v24.09-pre git sha1 4835eb82b", 00:06:16.805 "fields": { 00:06:16.805 "major": 24, 00:06:16.805 "minor": 9, 00:06:16.805 "patch": 0, 00:06:16.805 "suffix": "-pre", 00:06:16.805 "commit": "4835eb82b" 00:06:16.805 } 00:06:16.805 } 00:06:16.806 07:10:25 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:16.806 07:10:25 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:16.806 07:10:25 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:16.806 07:10:25 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:16.806 07:10:25 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:16.806 07:10:25 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.806 07:10:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:16.806 07:10:25 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:16.806 07:10:25 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:16.806 07:10:25 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.806 07:10:25 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:16.806 07:10:25 app_cmdline -- app/cmdline.sh@28 -- # [[ 
rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:16.806 07:10:25 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:16.806 07:10:25 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:16.806 07:10:25 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:16.806 07:10:25 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:16.806 07:10:25 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.806 07:10:25 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:16.806 07:10:25 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.806 07:10:25 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:16.806 07:10:25 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.806 07:10:25 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:16.806 07:10:25 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:16.806 07:10:25 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:17.062 request: 00:06:17.062 { 00:06:17.062 "method": "env_dpdk_get_mem_stats", 00:06:17.062 "req_id": 1 00:06:17.062 } 00:06:17.062 Got JSON-RPC error response 00:06:17.062 response: 00:06:17.062 { 00:06:17.062 "code": -32601, 00:06:17.062 "message": "Method not found" 00:06:17.062 } 00:06:17.062 07:10:25 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:17.062 07:10:25 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:17.062 07:10:25 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:17.062 07:10:25 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:17.062 07:10:25 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62130 00:06:17.062 07:10:25 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 62130 ']' 00:06:17.062 07:10:25 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 62130 00:06:17.062 07:10:25 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:17.062 07:10:25 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.062 07:10:25 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62130 00:06:17.062 07:10:25 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.062 07:10:25 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.062 killing process with pid 62130 00:06:17.062 07:10:25 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62130' 00:06:17.063 07:10:25 app_cmdline -- common/autotest_common.sh@967 -- # kill 62130 00:06:17.063 07:10:25 app_cmdline -- common/autotest_common.sh@972 -- # wait 62130 00:06:17.320 00:06:17.320 real 0m1.417s 00:06:17.320 user 0m1.919s 00:06:17.320 sys 0m0.343s 00:06:17.320 07:10:26 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.320 ************************************ 00:06:17.320 END TEST app_cmdline 00:06:17.320 07:10:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:17.320 
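Note: the app_cmdline run above demonstrates the RPC allow-list: spdk_tgt is started with --rpcs-allowed spdk_get_version,rpc_get_methods, those two methods succeed, and the unlisted env_dpdk_get_mem_stats call is rejected with JSON-RPC error -32601 ("Method not found"). A minimal sketch of the same exchange, with paths and method names as in the log:

  # Sketch of the allow-list behaviour checked by app_cmdline above.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done
  "$SPDK_DIR/scripts/rpc.py" spdk_get_version               # version JSON as shown above
  "$SPDK_DIR/scripts/rpc.py" rpc_get_methods | jq -r '.[]' | sort
  "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats         # fails with -32601 "Method not found"
  kill $!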
************************************ 00:06:17.320 07:10:26 -- common/autotest_common.sh@1142 -- # return 0 00:06:17.320 07:10:26 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:17.320 07:10:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.320 07:10:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.320 07:10:26 -- common/autotest_common.sh@10 -- # set +x 00:06:17.320 ************************************ 00:06:17.320 START TEST version 00:06:17.320 ************************************ 00:06:17.320 07:10:26 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:17.578 * Looking for test storage... 00:06:17.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:17.578 07:10:26 version -- app/version.sh@17 -- # get_header_version major 00:06:17.578 07:10:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:17.578 07:10:26 version -- app/version.sh@14 -- # cut -f2 00:06:17.578 07:10:26 version -- app/version.sh@14 -- # tr -d '"' 00:06:17.578 07:10:26 version -- app/version.sh@17 -- # major=24 00:06:17.578 07:10:26 version -- app/version.sh@18 -- # get_header_version minor 00:06:17.578 07:10:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:17.578 07:10:26 version -- app/version.sh@14 -- # cut -f2 00:06:17.578 07:10:26 version -- app/version.sh@14 -- # tr -d '"' 00:06:17.578 07:10:26 version -- app/version.sh@18 -- # minor=9 00:06:17.578 07:10:26 version -- app/version.sh@19 -- # get_header_version patch 00:06:17.578 07:10:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:17.578 07:10:26 version -- app/version.sh@14 -- # cut -f2 00:06:17.578 07:10:26 version -- app/version.sh@14 -- # tr -d '"' 00:06:17.578 07:10:26 version -- app/version.sh@19 -- # patch=0 00:06:17.578 07:10:26 version -- app/version.sh@20 -- # get_header_version suffix 00:06:17.578 07:10:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:17.578 07:10:26 version -- app/version.sh@14 -- # cut -f2 00:06:17.578 07:10:26 version -- app/version.sh@14 -- # tr -d '"' 00:06:17.578 07:10:26 version -- app/version.sh@20 -- # suffix=-pre 00:06:17.578 07:10:26 version -- app/version.sh@22 -- # version=24.9 00:06:17.578 07:10:26 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:17.578 07:10:26 version -- app/version.sh@28 -- # version=24.9rc0 00:06:17.578 07:10:26 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:17.578 07:10:26 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:17.578 07:10:26 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:17.578 07:10:26 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:17.578 00:06:17.578 real 0m0.155s 00:06:17.578 user 0m0.091s 00:06:17.578 sys 0m0.095s 00:06:17.578 07:10:26 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.578 07:10:26 version -- common/autotest_common.sh@10 -- # set +x 
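Note: version.sh passes because the version string it assembles from include/spdk/version.h matches what the Python package reports. The extraction is just the grep | cut | tr pipeline visible above; condensed into a standalone sketch (header path as in this repo layout):

  # Condensed form of the get_header_version pipeline shown above.
  hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  version=$major.$minor
  [ "$patch" != 0 ] && version=$version.$patch
  echo "$version$suffix"   # 24.9-pre here; the test maps -pre to rc0 before comparing with spdk.__version__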
00:06:17.578 ************************************ 00:06:17.578 END TEST version 00:06:17.578 ************************************ 00:06:17.578 07:10:26 -- common/autotest_common.sh@1142 -- # return 0 00:06:17.578 07:10:26 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:17.578 07:10:26 -- spdk/autotest.sh@198 -- # uname -s 00:06:17.578 07:10:26 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:17.578 07:10:26 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:17.578 07:10:26 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:06:17.578 07:10:26 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:06:17.578 07:10:26 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:17.578 07:10:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.578 07:10:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.578 07:10:26 -- common/autotest_common.sh@10 -- # set +x 00:06:17.578 ************************************ 00:06:17.578 START TEST spdk_dd 00:06:17.578 ************************************ 00:06:17.578 07:10:26 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:17.578 * Looking for test storage... 00:06:17.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:17.578 07:10:26 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:17.578 07:10:26 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.578 07:10:26 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.578 07:10:26 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.578 07:10:26 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.578 07:10:26 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.578 07:10:26 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.578 07:10:26 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:17.578 07:10:26 spdk_dd -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.578 07:10:26 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:18.145 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:18.145 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:18.145 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:18.145 07:10:26 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:18.145 07:10:26 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@230 -- # local class 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@232 -- # local progif 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@233 -- # class=01 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:06:18.145 07:10:26 
spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:06:18.145 07:10:26 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:18.145 07:10:26 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:18.145 07:10:26 spdk_dd -- dd/common.sh@139 -- # local lib so 00:06:18.145 07:10:26 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:18.145 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.145 07:10:26 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:06:18.145 07:10:26 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.145 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:06:18.145 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.145 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:18.145 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.145 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:18.145 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.145 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:06:18.145 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.145 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:18.145 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.145 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:18.145 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.145 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:18.145 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.145 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:18.145 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 
00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:18.146 07:10:26 
spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.1 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.1 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.1 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:18.146 
07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.1 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 
spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:18.146 07:10:26 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:18.147 07:10:26 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:18.147 07:10:26 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:18.147 * spdk_dd linked to liburing 00:06:18.147 07:10:26 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:18.147 07:10:26 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 
00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@57 
-- # CONFIG_HAVE_LIBBSD=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:18.147 07:10:26 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:06:18.147 07:10:26 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:18.147 07:10:26 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:06:18.147 07:10:26 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:06:18.147 07:10:26 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:06:18.147 07:10:26 spdk_dd -- dd/common.sh@157 -- # return 0 00:06:18.147 07:10:26 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:18.147 07:10:26 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:18.147 07:10:26 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:18.147 07:10:26 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.147 07:10:26 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:18.147 ************************************ 00:06:18.147 START TEST spdk_dd_basic_rw 00:06:18.147 ************************************ 00:06:18.147 07:10:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:18.147 * Looking for test storage... 
00:06:18.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:18.147 07:10:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:18.147 07:10:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.147 07:10:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.147 07:10:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.147 07:10:27 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.147 07:10:27 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.147 07:10:27 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.147 07:10:27 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:18.147 07:10:27 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.147 07:10:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:18.147 07:10:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:18.147 07:10:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:18.147 07:10:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:18.147 07:10:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:18.147 07:10:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:18.147 07:10:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:18.147 07:10:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:18.147 07:10:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:18.147 07:10:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:18.147 07:10:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:18.147 07:10:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:18.147 07:10:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:18.406 07:10:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:18.406 07:10:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:18.407 07:10:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private 
Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:18.407 07:10:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:18.407 07:10:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:18.407 07:10:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:18.407 07:10:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:18.407 07:10:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:18.407 07:10:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.407 07:10:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:18.407 07:10:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:18.407 07:10:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:18.407 07:10:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:18.407 07:10:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:18.407 ************************************ 00:06:18.407 START TEST dd_bs_lt_native_bs 00:06:18.407 ************************************ 00:06:18.407 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:18.407 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:06:18.407 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:18.407 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.407 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.407 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.407 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.407 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.407 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.407 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.407 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:18.407 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:18.407 { 00:06:18.407 "subsystems": [ 00:06:18.407 { 00:06:18.407 "subsystem": "bdev", 00:06:18.407 "config": [ 00:06:18.407 { 00:06:18.407 "params": { 00:06:18.407 "trtype": "pcie", 00:06:18.407 "traddr": "0000:00:10.0", 00:06:18.407 "name": "Nvme0" 00:06:18.407 }, 00:06:18.407 "method": "bdev_nvme_attach_controller" 00:06:18.407 }, 00:06:18.407 { 00:06:18.407 "method": "bdev_wait_for_examine" 00:06:18.407 } 00:06:18.407 ] 00:06:18.407 } 00:06:18.407 ] 00:06:18.407 } 00:06:18.407 [2024-07-15 07:10:27.313833] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
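The JSON fragments interleaved with the timestamps above are the bdev configuration that gen_conf feeds to spdk_dd over an anonymous file descriptor (the --json /dev/fd/61 argument), and native_bs=4096 comes from the two regex matches over the spdk_nvme_identify dump further up, which select the current LBA format (#04) and its 4096-byte data size. The dd_bs_lt_native_bs case therefore expects this spdk_dd run to fail, since --bs=2048 is below that native block size; the NOT wrapper on the run_test line inverts the exit status. A minimal sketch of the check, assuming a NOT helper along these lines (the real helper lives in autotest_common.sh and is more elaborate than this):

    NOT() { ! "$@"; }   # assumed simplification: succeed only if the wrapped command fails
    # expected to fail: --bs=2048 is smaller than the 4096-byte native block size of Nvme0n1
    NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61

The --bs ERROR recorded a few entries below is that expected failure; the es=234 / es=1 lines that follow it appear to be autotest_common normalizing the nonzero status before asserting it.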
00:06:18.407 [2024-07-15 07:10:27.313935] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62442 ] 00:06:18.667 [2024-07-15 07:10:27.455888] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.667 [2024-07-15 07:10:27.532002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.667 [2024-07-15 07:10:27.566524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:18.926 [2024-07-15 07:10:27.661728] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:18.926 [2024-07-15 07:10:27.661819] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:18.926 [2024-07-15 07:10:27.742564] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:18.926 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:06:18.926 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:18.926 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:06:18.926 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:06:18.926 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:06:18.926 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:18.926 00:06:18.926 real 0m0.575s 00:06:18.926 user 0m0.416s 00:06:18.926 sys 0m0.111s 00:06:18.926 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.926 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:18.926 ************************************ 00:06:18.926 END TEST dd_bs_lt_native_bs 00:06:18.926 ************************************ 00:06:18.926 07:10:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:18.926 07:10:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:18.926 07:10:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:18.926 07:10:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.926 07:10:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:19.185 ************************************ 00:06:19.185 START TEST dd_rw 00:06:19.185 ************************************ 00:06:19.185 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:06:19.185 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:19.185 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:19.185 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:19.185 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:19.185 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:19.185 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:19.185 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:19.185 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:19.185 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:19.185 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:19.185 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:19.185 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:19.185 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:19.185 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:19.185 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:19.185 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:19.185 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:19.185 07:10:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:19.752 07:10:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:19.752 07:10:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:19.752 07:10:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:19.752 07:10:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:19.752 [2024-07-15 07:10:28.622539] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:19.752 [2024-07-15 07:10:28.622658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62474 ] 00:06:19.752 { 00:06:19.752 "subsystems": [ 00:06:19.752 { 00:06:19.752 "subsystem": "bdev", 00:06:19.752 "config": [ 00:06:19.752 { 00:06:19.752 "params": { 00:06:19.752 "trtype": "pcie", 00:06:19.752 "traddr": "0000:00:10.0", 00:06:19.752 "name": "Nvme0" 00:06:19.752 }, 00:06:19.752 "method": "bdev_nvme_attach_controller" 00:06:19.752 }, 00:06:19.752 { 00:06:19.752 "method": "bdev_wait_for_examine" 00:06:19.752 } 00:06:19.752 ] 00:06:19.752 } 00:06:19.752 ] 00:06:19.752 } 00:06:20.010 [2024-07-15 07:10:28.762803] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.010 [2024-07-15 07:10:28.820583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.010 [2024-07-15 07:10:28.852326] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:20.269  Copying: 60/60 [kB] (average 29 MBps) 00:06:20.269 00:06:20.269 07:10:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:20.269 07:10:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:20.269 07:10:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:20.269 07:10:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:20.269 { 00:06:20.269 "subsystems": [ 00:06:20.269 { 00:06:20.269 "subsystem": "bdev", 00:06:20.269 "config": [ 
00:06:20.269 { 00:06:20.269 "params": { 00:06:20.269 "trtype": "pcie", 00:06:20.269 "traddr": "0000:00:10.0", 00:06:20.269 "name": "Nvme0" 00:06:20.269 }, 00:06:20.269 "method": "bdev_nvme_attach_controller" 00:06:20.269 }, 00:06:20.269 { 00:06:20.269 "method": "bdev_wait_for_examine" 00:06:20.269 } 00:06:20.269 ] 00:06:20.269 } 00:06:20.269 ] 00:06:20.269 } 00:06:20.269 [2024-07-15 07:10:29.164228] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:20.269 [2024-07-15 07:10:29.164321] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62488 ] 00:06:20.528 [2024-07-15 07:10:29.302856] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.528 [2024-07-15 07:10:29.362238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.528 [2024-07-15 07:10:29.392377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:20.786  Copying: 60/60 [kB] (average 29 MBps) 00:06:20.786 00:06:20.786 07:10:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:20.786 07:10:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:20.786 07:10:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:20.786 07:10:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:20.786 07:10:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:20.786 07:10:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:20.786 07:10:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:20.787 07:10:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:20.787 07:10:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:20.787 07:10:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:20.787 07:10:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:20.787 [2024-07-15 07:10:29.693682] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
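Every spdk_dd invocation in this excerpt is driven by the same generated configuration, echoed each time with log timestamps interleaved. Stripped of those timestamps, the document that gen_conf writes to the --json file descriptor reads as follows (reconstructed from the fragments in the trace, nothing added):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "trtype": "pcie",
                "traddr": "0000:00:10.0",
                "name": "Nvme0"
              },
              "method": "bdev_nvme_attach_controller"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }

It attaches the QEMU NVMe controller at 0000:00:10.0 as controller Nvme0 (hence bdev Nvme0n1) and waits for bdev examination to complete before any copy starts.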
00:06:20.787 [2024-07-15 07:10:29.693803] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62503 ] 00:06:20.787 { 00:06:20.787 "subsystems": [ 00:06:20.787 { 00:06:20.787 "subsystem": "bdev", 00:06:20.787 "config": [ 00:06:20.787 { 00:06:20.787 "params": { 00:06:20.787 "trtype": "pcie", 00:06:20.787 "traddr": "0000:00:10.0", 00:06:20.787 "name": "Nvme0" 00:06:20.787 }, 00:06:20.787 "method": "bdev_nvme_attach_controller" 00:06:20.787 }, 00:06:20.787 { 00:06:20.787 "method": "bdev_wait_for_examine" 00:06:20.787 } 00:06:20.787 ] 00:06:20.787 } 00:06:20.787 ] 00:06:20.787 } 00:06:21.045 [2024-07-15 07:10:29.826119] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.045 [2024-07-15 07:10:29.889259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.045 [2024-07-15 07:10:29.921482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:21.303  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:21.303 00:06:21.303 07:10:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:21.303 07:10:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:21.303 07:10:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:21.303 07:10:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:21.303 07:10:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:21.303 07:10:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:21.303 07:10:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:22.238 07:10:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:22.238 07:10:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:22.238 07:10:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:22.238 07:10:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:22.238 [2024-07-15 07:10:30.949567] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
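From here to the end of the excerpt, the dd_rw case repeats one round trip per (block size, queue depth) combination: write the generated dump file into the Nvme0n1 bdev, read the same number of blocks back into a second file, compare the two byte for byte, then blank the bdev before the next combination. A condensed sketch of one iteration, using only paths and flags visible in the trace; the example values are from the qd=1 iteration just completed, SPDK_DD/dump0/dump1 are shorthand introduced here, and /dev/fd/62 stands for the descriptor that carries the gen_conf JSON:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    bs=4096 qd=1 count=15                                      # 15 * 4096 = 61440 bytes
    "$SPDK_DD" --if="$dump0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json /dev/fd/62
    "$SPDK_DD" --ib=Nvme0n1 --of="$dump1" --bs="$bs" --qd="$qd" --count="$count" --json /dev/fd/62
    diff -q "$dump0" "$dump1"                                  # the round trip must be lossless
    # clear_nvme: overwrite the written region with zeroes before the next combination
    "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62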
00:06:22.238 [2024-07-15 07:10:30.949672] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62528 ] 00:06:22.238 { 00:06:22.238 "subsystems": [ 00:06:22.238 { 00:06:22.238 "subsystem": "bdev", 00:06:22.238 "config": [ 00:06:22.238 { 00:06:22.238 "params": { 00:06:22.238 "trtype": "pcie", 00:06:22.238 "traddr": "0000:00:10.0", 00:06:22.238 "name": "Nvme0" 00:06:22.238 }, 00:06:22.238 "method": "bdev_nvme_attach_controller" 00:06:22.238 }, 00:06:22.238 { 00:06:22.238 "method": "bdev_wait_for_examine" 00:06:22.238 } 00:06:22.238 ] 00:06:22.238 } 00:06:22.238 ] 00:06:22.238 } 00:06:22.238 [2024-07-15 07:10:31.091299] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.238 [2024-07-15 07:10:31.161007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.496 [2024-07-15 07:10:31.194852] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:22.496  Copying: 60/60 [kB] (average 58 MBps) 00:06:22.496 00:06:22.496 07:10:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:22.496 07:10:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:22.496 07:10:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:22.496 07:10:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:22.754 { 00:06:22.754 "subsystems": [ 00:06:22.754 { 00:06:22.754 "subsystem": "bdev", 00:06:22.754 "config": [ 00:06:22.754 { 00:06:22.754 "params": { 00:06:22.754 "trtype": "pcie", 00:06:22.754 "traddr": "0000:00:10.0", 00:06:22.754 "name": "Nvme0" 00:06:22.754 }, 00:06:22.754 "method": "bdev_nvme_attach_controller" 00:06:22.754 }, 00:06:22.754 { 00:06:22.754 "method": "bdev_wait_for_examine" 00:06:22.754 } 00:06:22.754 ] 00:06:22.754 } 00:06:22.754 ] 00:06:22.754 } 00:06:22.754 [2024-07-15 07:10:31.495760] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:22.754 [2024-07-15 07:10:31.495854] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62540 ] 00:06:22.754 [2024-07-15 07:10:31.632356] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.754 [2024-07-15 07:10:31.692462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.014 [2024-07-15 07:10:31.722965] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.272  Copying: 60/60 [kB] (average 58 MBps) 00:06:23.272 00:06:23.273 07:10:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:23.273 07:10:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:23.273 07:10:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:23.273 07:10:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:23.273 07:10:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:23.273 07:10:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:23.273 07:10:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:23.273 07:10:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:23.273 07:10:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:23.273 07:10:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:23.273 07:10:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:23.273 [2024-07-15 07:10:32.025810] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:23.273 [2024-07-15 07:10:32.025903] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62559 ] 00:06:23.273 { 00:06:23.273 "subsystems": [ 00:06:23.273 { 00:06:23.273 "subsystem": "bdev", 00:06:23.273 "config": [ 00:06:23.273 { 00:06:23.273 "params": { 00:06:23.273 "trtype": "pcie", 00:06:23.273 "traddr": "0000:00:10.0", 00:06:23.273 "name": "Nvme0" 00:06:23.273 }, 00:06:23.273 "method": "bdev_nvme_attach_controller" 00:06:23.273 }, 00:06:23.273 { 00:06:23.273 "method": "bdev_wait_for_examine" 00:06:23.273 } 00:06:23.273 ] 00:06:23.273 } 00:06:23.273 ] 00:06:23.273 } 00:06:23.273 [2024-07-15 07:10:32.163985] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.531 [2024-07-15 07:10:32.232196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.531 [2024-07-15 07:10:32.264783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.790  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:23.790 00:06:23.790 07:10:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:23.790 07:10:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:23.790 07:10:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:23.790 07:10:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:23.790 07:10:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:23.790 07:10:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:23.790 07:10:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:23.790 07:10:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:24.356 07:10:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:24.356 07:10:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:24.356 07:10:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:24.356 07:10:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:24.356 [2024-07-15 07:10:33.208286] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
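The sizes cycling through the remainder of the log follow from the loop variables set up at the top of dd_rw: bss collects native_bs shifted left by 0, 1 and 2, qds is (1 64), and count shrinks as the block size grows so that the copied payload stays near 60 KiB. The arithmetic behind the size= values seen so far:

    # bss+=($((native_bs << bs))) for bs in {0..2}, with native_bs=4096
    4096 << 0 = 4096  bytes, count=15  ->  15 * 4096 = 61440
    4096 << 1 = 8192  bytes, count=7   ->   7 * 8192 = 57344
    4096 << 2 = 16384 bytes (third size in the sequence; its count falls past the end of this excerpt)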
00:06:24.356 [2024-07-15 07:10:33.208386] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62578 ] 00:06:24.356 { 00:06:24.356 "subsystems": [ 00:06:24.356 { 00:06:24.356 "subsystem": "bdev", 00:06:24.356 "config": [ 00:06:24.356 { 00:06:24.356 "params": { 00:06:24.356 "trtype": "pcie", 00:06:24.356 "traddr": "0000:00:10.0", 00:06:24.356 "name": "Nvme0" 00:06:24.356 }, 00:06:24.356 "method": "bdev_nvme_attach_controller" 00:06:24.356 }, 00:06:24.356 { 00:06:24.356 "method": "bdev_wait_for_examine" 00:06:24.356 } 00:06:24.356 ] 00:06:24.356 } 00:06:24.356 ] 00:06:24.356 } 00:06:24.614 [2024-07-15 07:10:33.342881] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.614 [2024-07-15 07:10:33.432665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.614 [2024-07-15 07:10:33.466487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:24.872  Copying: 56/56 [kB] (average 27 MBps) 00:06:24.872 00:06:24.872 07:10:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:24.872 07:10:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:24.872 07:10:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:24.872 07:10:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:24.872 [2024-07-15 07:10:33.757620] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:24.872 [2024-07-15 07:10:33.757710] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62591 ] 00:06:24.872 { 00:06:24.872 "subsystems": [ 00:06:24.872 { 00:06:24.872 "subsystem": "bdev", 00:06:24.872 "config": [ 00:06:24.872 { 00:06:24.872 "params": { 00:06:24.872 "trtype": "pcie", 00:06:24.872 "traddr": "0000:00:10.0", 00:06:24.872 "name": "Nvme0" 00:06:24.872 }, 00:06:24.872 "method": "bdev_nvme_attach_controller" 00:06:24.872 }, 00:06:24.872 { 00:06:24.872 "method": "bdev_wait_for_examine" 00:06:24.872 } 00:06:24.872 ] 00:06:24.872 } 00:06:24.872 ] 00:06:24.872 } 00:06:25.130 [2024-07-15 07:10:33.893099] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.130 [2024-07-15 07:10:33.951425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.130 [2024-07-15 07:10:33.980663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:25.387  Copying: 56/56 [kB] (average 27 MBps) 00:06:25.387 00:06:25.387 07:10:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:25.387 07:10:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:25.387 07:10:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:25.387 07:10:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:25.387 07:10:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:25.387 07:10:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:25.387 07:10:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:25.387 07:10:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:25.387 07:10:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:25.387 07:10:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:25.387 07:10:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:25.387 { 00:06:25.387 "subsystems": [ 00:06:25.387 { 00:06:25.387 "subsystem": "bdev", 00:06:25.387 "config": [ 00:06:25.387 { 00:06:25.387 "params": { 00:06:25.387 "trtype": "pcie", 00:06:25.387 "traddr": "0000:00:10.0", 00:06:25.387 "name": "Nvme0" 00:06:25.387 }, 00:06:25.387 "method": "bdev_nvme_attach_controller" 00:06:25.387 }, 00:06:25.387 { 00:06:25.387 "method": "bdev_wait_for_examine" 00:06:25.387 } 00:06:25.387 ] 00:06:25.387 } 00:06:25.387 ] 00:06:25.387 } 00:06:25.387 [2024-07-15 07:10:34.291730] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:25.387 [2024-07-15 07:10:34.291855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62607 ] 00:06:25.645 [2024-07-15 07:10:34.429717] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.645 [2024-07-15 07:10:34.485075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.645 [2024-07-15 07:10:34.513996] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:25.903  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:25.903 00:06:25.903 07:10:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:25.903 07:10:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:25.903 07:10:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:25.903 07:10:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:25.903 07:10:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:25.903 07:10:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:25.903 07:10:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:26.470 07:10:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:26.470 07:10:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:26.470 07:10:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:26.470 07:10:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:26.470 { 00:06:26.470 "subsystems": [ 00:06:26.470 { 00:06:26.470 "subsystem": "bdev", 00:06:26.470 "config": [ 00:06:26.470 { 00:06:26.470 "params": { 00:06:26.470 "trtype": "pcie", 00:06:26.470 "traddr": "0000:00:10.0", 00:06:26.470 "name": "Nvme0" 00:06:26.470 }, 00:06:26.470 "method": "bdev_nvme_attach_controller" 00:06:26.470 }, 00:06:26.470 { 00:06:26.470 "method": "bdev_wait_for_examine" 00:06:26.470 } 00:06:26.470 ] 00:06:26.470 } 00:06:26.470 ] 00:06:26.470 } 00:06:26.729 [2024-07-15 07:10:35.424202] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:26.729 [2024-07-15 07:10:35.424295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62626 ] 00:06:26.729 [2024-07-15 07:10:35.567170] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.729 [2024-07-15 07:10:35.623826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.729 [2024-07-15 07:10:35.655657] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:26.988  Copying: 56/56 [kB] (average 54 MBps) 00:06:26.988 00:06:26.988 07:10:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:26.988 07:10:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:26.988 07:10:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:26.988 07:10:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:27.248 [2024-07-15 07:10:35.951812] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:27.248 [2024-07-15 07:10:35.951909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62639 ] 00:06:27.248 { 00:06:27.248 "subsystems": [ 00:06:27.248 { 00:06:27.248 "subsystem": "bdev", 00:06:27.248 "config": [ 00:06:27.248 { 00:06:27.248 "params": { 00:06:27.248 "trtype": "pcie", 00:06:27.248 "traddr": "0000:00:10.0", 00:06:27.248 "name": "Nvme0" 00:06:27.248 }, 00:06:27.248 "method": "bdev_nvme_attach_controller" 00:06:27.248 }, 00:06:27.248 { 00:06:27.248 "method": "bdev_wait_for_examine" 00:06:27.248 } 00:06:27.248 ] 00:06:27.248 } 00:06:27.248 ] 00:06:27.248 } 00:06:27.248 [2024-07-15 07:10:36.090342] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.248 [2024-07-15 07:10:36.145618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.248 [2024-07-15 07:10:36.174056] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:27.544  Copying: 56/56 [kB] (average 54 MBps) 00:06:27.544 00:06:27.544 07:10:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:27.544 07:10:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:27.544 07:10:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:27.544 07:10:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:27.544 07:10:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:27.544 07:10:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:27.544 07:10:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:27.544 07:10:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:27.544 07:10:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/common.sh@18 -- # gen_conf 00:06:27.544 07:10:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:27.544 07:10:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:27.544 [2024-07-15 07:10:36.462806] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:27.544 [2024-07-15 07:10:36.462911] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62655 ] 00:06:27.544 { 00:06:27.544 "subsystems": [ 00:06:27.544 { 00:06:27.544 "subsystem": "bdev", 00:06:27.544 "config": [ 00:06:27.544 { 00:06:27.544 "params": { 00:06:27.544 "trtype": "pcie", 00:06:27.544 "traddr": "0000:00:10.0", 00:06:27.544 "name": "Nvme0" 00:06:27.544 }, 00:06:27.544 "method": "bdev_nvme_attach_controller" 00:06:27.544 }, 00:06:27.544 { 00:06:27.544 "method": "bdev_wait_for_examine" 00:06:27.544 } 00:06:27.544 ] 00:06:27.544 } 00:06:27.544 ] 00:06:27.544 } 00:06:27.803 [2024-07-15 07:10:36.592276] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.803 [2024-07-15 07:10:36.653079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.803 [2024-07-15 07:10:36.683643] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:28.062  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:28.062 00:06:28.062 07:10:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:28.062 07:10:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:28.062 07:10:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:28.062 07:10:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:28.062 07:10:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:28.062 07:10:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:28.062 07:10:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:28.062 07:10:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:28.629 07:10:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:28.629 07:10:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:28.629 07:10:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:28.629 07:10:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:28.629 [2024-07-15 07:10:37.501989] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:28.629 [2024-07-15 07:10:37.502088] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62674 ] 00:06:28.629 { 00:06:28.629 "subsystems": [ 00:06:28.629 { 00:06:28.629 "subsystem": "bdev", 00:06:28.629 "config": [ 00:06:28.629 { 00:06:28.629 "params": { 00:06:28.629 "trtype": "pcie", 00:06:28.629 "traddr": "0000:00:10.0", 00:06:28.629 "name": "Nvme0" 00:06:28.629 }, 00:06:28.629 "method": "bdev_nvme_attach_controller" 00:06:28.629 }, 00:06:28.629 { 00:06:28.629 "method": "bdev_wait_for_examine" 00:06:28.629 } 00:06:28.629 ] 00:06:28.629 } 00:06:28.629 ] 00:06:28.629 } 00:06:28.887 [2024-07-15 07:10:37.632792] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.887 [2024-07-15 07:10:37.709279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.887 [2024-07-15 07:10:37.738303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:29.145  Copying: 48/48 [kB] (average 46 MBps) 00:06:29.145 00:06:29.145 07:10:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:29.145 07:10:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:29.145 07:10:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:29.145 07:10:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:29.145 [2024-07-15 07:10:38.030878] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:29.145 [2024-07-15 07:10:38.030958] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62687 ] 00:06:29.145 { 00:06:29.145 "subsystems": [ 00:06:29.145 { 00:06:29.145 "subsystem": "bdev", 00:06:29.145 "config": [ 00:06:29.145 { 00:06:29.145 "params": { 00:06:29.145 "trtype": "pcie", 00:06:29.145 "traddr": "0000:00:10.0", 00:06:29.145 "name": "Nvme0" 00:06:29.145 }, 00:06:29.145 "method": "bdev_nvme_attach_controller" 00:06:29.145 }, 00:06:29.145 { 00:06:29.145 "method": "bdev_wait_for_examine" 00:06:29.145 } 00:06:29.145 ] 00:06:29.145 } 00:06:29.145 ] 00:06:29.145 } 00:06:29.403 [2024-07-15 07:10:38.165248] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.403 [2024-07-15 07:10:38.223504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.403 [2024-07-15 07:10:38.252634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:29.662  Copying: 48/48 [kB] (average 46 MBps) 00:06:29.662 00:06:29.662 07:10:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:29.662 07:10:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:29.662 07:10:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:29.662 07:10:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:29.662 07:10:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:29.662 07:10:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:29.662 07:10:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:29.662 07:10:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:29.662 07:10:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:29.662 07:10:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:29.662 07:10:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:29.662 [2024-07-15 07:10:38.557954] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:29.662 [2024-07-15 07:10:38.558299] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62703 ] 00:06:29.662 { 00:06:29.662 "subsystems": [ 00:06:29.662 { 00:06:29.662 "subsystem": "bdev", 00:06:29.662 "config": [ 00:06:29.662 { 00:06:29.662 "params": { 00:06:29.662 "trtype": "pcie", 00:06:29.662 "traddr": "0000:00:10.0", 00:06:29.662 "name": "Nvme0" 00:06:29.662 }, 00:06:29.662 "method": "bdev_nvme_attach_controller" 00:06:29.662 }, 00:06:29.662 { 00:06:29.662 "method": "bdev_wait_for_examine" 00:06:29.662 } 00:06:29.662 ] 00:06:29.662 } 00:06:29.662 ] 00:06:29.662 } 00:06:29.920 [2024-07-15 07:10:38.697353] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.920 [2024-07-15 07:10:38.755707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.920 [2024-07-15 07:10:38.785241] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:30.179  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:30.179 00:06:30.179 07:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:30.179 07:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:30.179 07:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:30.179 07:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:30.179 07:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:30.179 07:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:30.179 07:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:30.744 07:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:30.744 07:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:30.744 07:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:30.744 07:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:30.744 [2024-07-15 07:10:39.631885] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:30.744 [2024-07-15 07:10:39.632261] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62722 ] 00:06:30.744 { 00:06:30.744 "subsystems": [ 00:06:30.744 { 00:06:30.744 "subsystem": "bdev", 00:06:30.744 "config": [ 00:06:30.744 { 00:06:30.744 "params": { 00:06:30.744 "trtype": "pcie", 00:06:30.744 "traddr": "0000:00:10.0", 00:06:30.744 "name": "Nvme0" 00:06:30.744 }, 00:06:30.744 "method": "bdev_nvme_attach_controller" 00:06:30.744 }, 00:06:30.744 { 00:06:30.744 "method": "bdev_wait_for_examine" 00:06:30.744 } 00:06:30.744 ] 00:06:30.744 } 00:06:30.744 ] 00:06:30.744 } 00:06:31.002 [2024-07-15 07:10:39.770622] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.002 [2024-07-15 07:10:39.829413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.002 [2024-07-15 07:10:39.858805] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:31.259  Copying: 48/48 [kB] (average 46 MBps) 00:06:31.259 00:06:31.259 07:10:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:31.259 07:10:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:31.259 07:10:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:31.259 07:10:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:31.259 [2024-07-15 07:10:40.149476] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:31.259 [2024-07-15 07:10:40.149564] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62741 ] 00:06:31.259 { 00:06:31.259 "subsystems": [ 00:06:31.259 { 00:06:31.259 "subsystem": "bdev", 00:06:31.259 "config": [ 00:06:31.259 { 00:06:31.259 "params": { 00:06:31.259 "trtype": "pcie", 00:06:31.259 "traddr": "0000:00:10.0", 00:06:31.259 "name": "Nvme0" 00:06:31.259 }, 00:06:31.259 "method": "bdev_nvme_attach_controller" 00:06:31.259 }, 00:06:31.259 { 00:06:31.260 "method": "bdev_wait_for_examine" 00:06:31.260 } 00:06:31.260 ] 00:06:31.260 } 00:06:31.260 ] 00:06:31.260 } 00:06:31.517 [2024-07-15 07:10:40.283038] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.517 [2024-07-15 07:10:40.342147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.517 [2024-07-15 07:10:40.371397] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:31.775  Copying: 48/48 [kB] (average 46 MBps) 00:06:31.775 00:06:31.775 07:10:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:31.775 07:10:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:31.775 07:10:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:31.775 07:10:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:31.775 07:10:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:31.775 07:10:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:31.775 07:10:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:31.775 07:10:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:31.775 07:10:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:31.775 07:10:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:31.775 07:10:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:31.775 [2024-07-15 07:10:40.677452] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:31.775 [2024-07-15 07:10:40.677558] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62751 ] 00:06:31.775 { 00:06:31.775 "subsystems": [ 00:06:31.775 { 00:06:31.775 "subsystem": "bdev", 00:06:31.775 "config": [ 00:06:31.775 { 00:06:31.775 "params": { 00:06:31.775 "trtype": "pcie", 00:06:31.775 "traddr": "0000:00:10.0", 00:06:31.775 "name": "Nvme0" 00:06:31.775 }, 00:06:31.775 "method": "bdev_nvme_attach_controller" 00:06:31.775 }, 00:06:31.775 { 00:06:31.775 "method": "bdev_wait_for_examine" 00:06:31.775 } 00:06:31.776 ] 00:06:31.776 } 00:06:31.776 ] 00:06:31.776 } 00:06:32.034 [2024-07-15 07:10:40.816967] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.034 [2024-07-15 07:10:40.875215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.034 [2024-07-15 07:10:40.904252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:32.292  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:32.292 00:06:32.292 ************************************ 00:06:32.292 END TEST dd_rw 00:06:32.292 ************************************ 00:06:32.292 00:06:32.292 real 0m13.266s 00:06:32.292 user 0m10.190s 00:06:32.292 sys 0m3.741s 00:06:32.292 07:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.292 07:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:32.292 07:10:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:32.292 07:10:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:32.292 07:10:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.292 07:10:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.292 07:10:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:32.292 ************************************ 00:06:32.292 START TEST dd_rw_offset 00:06:32.292 ************************************ 00:06:32.292 07:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:06:32.292 07:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:32.292 07:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:32.292 07:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:32.292 07:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:32.550 07:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:32.551 07:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=573kn3tc97xnt19ij1f7848lhkd7u2iqxex2zvsmc9gpxiw5avhrnolqct92hzqzrj4tmmqcgvv88mwqy7gil4r3drgbh1udhdbgerihc8gzlztvzai2ee9bknri1vvwd5hdsyh4i7xphynih7jt0nwu3jzfqg69kf1nupa1fnjdb7o4dpgeub5a3bikw2thspiqt5uowgonppcl9tt4db35o5xwtvodghdpvtnvs41ml0cuwkq7ls989iioav0z58urfrlji6dijy71m81yni3tojjv1vlf166f6sfmilkql0mxbbqu3xdt4mhufnaqi961zxwnnbc07j1hl2bhjehk6308vsjj458zw4ny07pqe2uv8h5887s6e7iuplwzifjh4gqxn98zqxuaq1nnh7b1amqillqu4ssd8l6vlf6u7ocxx1o8rsyusarzg6i75gay5dzkoliaiw4isfgoaawu8a29d1aavlj5mffqzpk54bdosht85vwiasp0nvb79oitia7ajs052hfl0ul2lzrrr3gs6gu1906ttvxk7gnv5zy51gjcy0ksm9r4r069s9slgqbcr56ibdoy4r3jdfm8nrqprvdg6fof5tvrfyu7q3bii4wnfo1bx3rg6peq0zqfngtk22tzg262ugfz5cxr2zv75x8g18mo8c3ia9umzewx66eglydl1r7qxvpz6qhds4ctm8sovoeuwata6oa9nkotnl6bvze14s9ungdef7nz8wq93r1n3ozqaqimx7r8r0kilfv66ygfnxlgi9rphnbjngt70su06xt34tpvcusa7bh9w31kfma6nbyxih13w31r78s9g8s66fr1g4poyu2238geioutejpimsh0sfhvb61p9oryugi8jv4bjdpo7cbxbi9yssdmntuity7949v17f1kt0df9nayii7ao1cca98t4x7kbrwh97jgt3fry9yahinoh5fs6h2opwz0j8nzyuhj1i8n5zf14vib9gbipickiwpbs78nxunumeev5pshg59b2pqplz2vap8m0vxlj57n4bxvglknswbdsukrwp9samnbhvn6yj6skzznidnsq1cypm09bgjendo6oud6a9qrx7pyu1qo389l5cc52cx2090tdf1b4ipa4yhfwfcmvt1ccgln0qglyby0ru6b25c10d2dv8snr8c6gs22n4tjx902k89r6yvmjl9d1780qq0ww2mzac2gmdba5g7exk3spvj7knlaxl91yv9f8f9gnop3lhd21odpt7a79yq4ppf9sqizcapnv4akfn1vxtvui2met4h0f0fvg49jhkp0wyb7aa4jq77yjrqd7y8hy4tvs6z4ha7wfsh64myuzs9r72xnm7m1irln32tckt1vt6d3a6hnwb7x1qdlnkiyfa3l5b58zwzp8fp8doe44i52s91mxsm6xxmb6caup8rzl4ebmzs6xczjize98vu2lvg6x3nt3nzpkj1jqvtv2gmmav0cng8wd9fous7vj9tmnnsx6php5qfrkp9e69bbhjq5cn4r5a3knmbbz41q5ci2kwfrwnwtokt5v4l1pfyivdc0m9r1dp7drj3be8nwdk1iv35iudrryeffx8mw5l7m9blmccrod1yxxaqgsg2z6rprcilsdgg83aj1exab13jsltqjkqa9gzbeu0hsmjm8j40e8dltsu09lmm91lszhtipu3y25estu6zbsai0e0cqv7thfklmvupwyw9y9cy8u5ms402nqrc50ekia6el58tom0b8s5nz1o1axriri9e5bpooieddo9g57x8sn8iwxcdjmyydsq1hhhfq74pa30djq5qu5lazwzcct4jmworzx35tlfddc1fut0rp6rgvdj4ddaouyh5sf8hws262phlnomls71h4ql968zle5kr4niginptjyfcsdvdcasacm4hn9spms9ey5m11sbg5z6xwl0mf7uef5pg6gd46zigxr4q7dmyf7r80zo5swfz76f4l0lg06gvk3cvsiaht37z0f7wihv22gjal3zzxo7kelsxvfxpe6jnbk3ai7ueascc9gfo3r7cv4ektxu7828qvjnubpvhq2jot0tvfmxcfljizhda6ldc3op7t0552ftjit6io4h9574x7r0tz2fftzpia6xz5mc3mi2ij35w2by3wd9sdt0yavb73gj3mmuist9z8vf0o6ssg8cz3l2rf7894bq8a4hwhmbz2y1o1mpnj9qegcbqujkxnehv7jquw68iadkbikgto9qtb41vfxtdz0am9c1hpnydlwxtp9zylwdgc7448b6hrxaucmnbu8ck9d5l13fzn8ynzljou6q04ai9n23pbi5shb8o8xdgr5tax0712qpb94bxc64dlo77w7zu75zu5sffrrl31ktkrsl48a1tcb411pfnwbjpifs29a2be38thi97wgdeab5jd33xqquvcnh8ybp6v3se62kuxti7lf0gz1llhguppks2j9r8cmyv2yvhk4u4gxnmahmab56w71r4jarkptpo1lma5vzpmvtyqt20m1r5n73yahu5im764ocim3kwc8co610okxncverpg23bi4pm0wjgpj5m8xmgf6zgjexr1itf2yzasz8gizutsg6wc67ghiso9tbhnibpmju8b3wmqcdbq3u7q7yi9xhvb3yztlqafyskpp5lh0wu29rt86mjm8jlb51ovgrhahvu4o4vcn5pjipd2tk45pns0tmw3fajmo8kyam2w3gw11txtd1yiyje6f4d3pnis2gzi5qng6il8j2vggmfnih74fy692d6z2ilz2ik7ak3qc3x6mscksefqluffes647ubtu0s7z3chx0j3m2g1ccr39spl8pbr1otwpwcw4pueq0exx21aj5mqr4y3cv0sjaieofwmlp0m7gqv8byd8lppn6545i4ovsu8revaj4dv4nvhds88oa5altg9pr0q4vjibv398dnjfr9p0uee5xaquar87g7bptonz9gcrow1cuxzfo4t5no8vdesyt877kope1rb9383qyg34cbdmujtitl2ij1logki6v16hkk927a3fmr9jv9jhf3oloau78k6m5nz2mlhx8q7ja26sjfy9eioks9xhb1wsk5z9res8k27kfajc23ypbtm2z0sgwv70ztwzfv4x76dgrs10flawstzr70ylv8fgejj8uk22yzdhsw55yzu31wdxnrz4ba862imokxeuyqjxa80sx2wzw9nuds6rrdtsroge2pkhcbvy1x1v0cwyh01tf0d79uym7v6pj2k1qx8gaw9mqwpn5ztyntwiwflorlqkbht6h97cvevrltknmjo9eifa9vqwkf6qsy02dea3eln9rvw7nlu348awpoqt7pu9ze0gemdsuzrgf6fb125681r9s5t19wzl4hwhr5iwqbokz8bkadk4e1zn6rptrhvzsntrcx16bklwo18hpini8psb18mtdvnpxfdwexypfr6v4gpzpg9knf3ipqlra93yi
2jowwmyzqnsl7vy4ksc1baf5tam40s61a6lxg4n9l1dcf851n2rxg23vidthzgsrx3yedje91yf3th5nj52ocyfwk0510i6qlqeimu2vjdy4dom0w3k2u7r15pfq2vpcfuvphjm77iapltknx6ep2s1aq80ofj9miwqarta5ti5772n2dl9gy5kkfrctvulwven74wlw4avbclakbgma2xclb2o2v4w0anfreclbno1kgv78p5e9twjnznond0aqgrbgfsih541gwwykwasunip5e35i51v9422vpgjifpggg9qrtojldoqusp90nbr5935xovlvfndtur1m8h2yg2eevdfz58nfnp8s4enu3qetexfo78ze2ab7m9jw0whsukji1mkhmuog5yirxczxpfzp9853m9jailb6cmyqdcjgctxi66qoihjb3b9yml2cpr390i360dsslvim1g0sp4llkf34sseiq91rzy27bf15emnpokbpuj5s0ugk32jkybv32n53d797oqkz1vfevnyruswn0csk49 00:06:32.551 07:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:32.551 07:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:32.551 07:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:32.551 07:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:32.551 [2024-07-15 07:10:41.294160] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:32.551 [2024-07-15 07:10:41.294249] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62787 ] 00:06:32.551 { 00:06:32.551 "subsystems": [ 00:06:32.551 { 00:06:32.551 "subsystem": "bdev", 00:06:32.551 "config": [ 00:06:32.551 { 00:06:32.551 "params": { 00:06:32.551 "trtype": "pcie", 00:06:32.551 "traddr": "0000:00:10.0", 00:06:32.551 "name": "Nvme0" 00:06:32.551 }, 00:06:32.551 "method": "bdev_nvme_attach_controller" 00:06:32.551 }, 00:06:32.551 { 00:06:32.551 "method": "bdev_wait_for_examine" 00:06:32.551 } 00:06:32.551 ] 00:06:32.551 } 00:06:32.551 ] 00:06:32.551 } 00:06:32.551 [2024-07-15 07:10:41.428168] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.551 [2024-07-15 07:10:41.498552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.809 [2024-07-15 07:10:41.533831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.067  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:33.067 00:06:33.067 07:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:33.067 07:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:33.067 07:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:33.067 07:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:33.067 [2024-07-15 07:10:41.827519] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:33.067 [2024-07-15 07:10:41.827604] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62795 ] 00:06:33.067 { 00:06:33.067 "subsystems": [ 00:06:33.067 { 00:06:33.067 "subsystem": "bdev", 00:06:33.067 "config": [ 00:06:33.067 { 00:06:33.067 "params": { 00:06:33.067 "trtype": "pcie", 00:06:33.067 "traddr": "0000:00:10.0", 00:06:33.067 "name": "Nvme0" 00:06:33.067 }, 00:06:33.067 "method": "bdev_nvme_attach_controller" 00:06:33.067 }, 00:06:33.067 { 00:06:33.067 "method": "bdev_wait_for_examine" 00:06:33.067 } 00:06:33.067 ] 00:06:33.067 } 00:06:33.067 ] 00:06:33.067 } 00:06:33.067 [2024-07-15 07:10:41.960199] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.067 [2024-07-15 07:10:42.019029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.324 [2024-07-15 07:10:42.049288] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.584  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:33.584 00:06:33.584 07:10:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:33.584 07:10:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 573kn3tc97xnt19ij1f7848lhkd7u2iqxex2zvsmc9gpxiw5avhrnolqct92hzqzrj4tmmqcgvv88mwqy7gil4r3drgbh1udhdbgerihc8gzlztvzai2ee9bknri1vvwd5hdsyh4i7xphynih7jt0nwu3jzfqg69kf1nupa1fnjdb7o4dpgeub5a3bikw2thspiqt5uowgonppcl9tt4db35o5xwtvodghdpvtnvs41ml0cuwkq7ls989iioav0z58urfrlji6dijy71m81yni3tojjv1vlf166f6sfmilkql0mxbbqu3xdt4mhufnaqi961zxwnnbc07j1hl2bhjehk6308vsjj458zw4ny07pqe2uv8h5887s6e7iuplwzifjh4gqxn98zqxuaq1nnh7b1amqillqu4ssd8l6vlf6u7ocxx1o8rsyusarzg6i75gay5dzkoliaiw4isfgoaawu8a29d1aavlj5mffqzpk54bdosht85vwiasp0nvb79oitia7ajs052hfl0ul2lzrrr3gs6gu1906ttvxk7gnv5zy51gjcy0ksm9r4r069s9slgqbcr56ibdoy4r3jdfm8nrqprvdg6fof5tvrfyu7q3bii4wnfo1bx3rg6peq0zqfngtk22tzg262ugfz5cxr2zv75x8g18mo8c3ia9umzewx66eglydl1r7qxvpz6qhds4ctm8sovoeuwata6oa9nkotnl6bvze14s9ungdef7nz8wq93r1n3ozqaqimx7r8r0kilfv66ygfnxlgi9rphnbjngt70su06xt34tpvcusa7bh9w31kfma6nbyxih13w31r78s9g8s66fr1g4poyu2238geioutejpimsh0sfhvb61p9oryugi8jv4bjdpo7cbxbi9yssdmntuity7949v17f1kt0df9nayii7ao1cca98t4x7kbrwh97jgt3fry9yahinoh5fs6h2opwz0j8nzyuhj1i8n5zf14vib9gbipickiwpbs78nxunumeev5pshg59b2pqplz2vap8m0vxlj57n4bxvglknswbdsukrwp9samnbhvn6yj6skzznidnsq1cypm09bgjendo6oud6a9qrx7pyu1qo389l5cc52cx2090tdf1b4ipa4yhfwfcmvt1ccgln0qglyby0ru6b25c10d2dv8snr8c6gs22n4tjx902k89r6yvmjl9d1780qq0ww2mzac2gmdba5g7exk3spvj7knlaxl91yv9f8f9gnop3lhd21odpt7a79yq4ppf9sqizcapnv4akfn1vxtvui2met4h0f0fvg49jhkp0wyb7aa4jq77yjrqd7y8hy4tvs6z4ha7wfsh64myuzs9r72xnm7m1irln32tckt1vt6d3a6hnwb7x1qdlnkiyfa3l5b58zwzp8fp8doe44i52s91mxsm6xxmb6caup8rzl4ebmzs6xczjize98vu2lvg6x3nt3nzpkj1jqvtv2gmmav0cng8wd9fous7vj9tmnnsx6php5qfrkp9e69bbhjq5cn4r5a3knmbbz41q5ci2kwfrwnwtokt5v4l1pfyivdc0m9r1dp7drj3be8nwdk1iv35iudrryeffx8mw5l7m9blmccrod1yxxaqgsg2z6rprcilsdgg83aj1exab13jsltqjkqa9gzbeu0hsmjm8j40e8dltsu09lmm91lszhtipu3y25estu6zbsai0e0cqv7thfklmvupwyw9y9cy8u5ms402nqrc50ekia6el58tom0b8s5nz1o1axriri9e5bpooieddo9g57x8sn8iwxcdjmyydsq1hhhfq74pa30djq5qu5lazwzcct4jmworzx35tlfddc1fut0rp6rgvdj4ddaouyh5sf8hws262phlnomls71h4ql968zle5kr4niginptjyfcsdvdcasacm4hn9spms9ey5m11sbg5z6xwl0mf7uef5pg6gd46zigxr4q7dmyf7r80zo5swfz76f4l0lg06gvk3cvsiaht37z0f7wihv22gjal3zzxo7kelsxvfxpe6jnbk3ai7ueascc9gfo3r7cv4ektxu7828qvjnubpvhq2jot0tvfmxcfljizhda6ldc3op7t0552ftjit6io4h957
4x7r0tz2fftzpia6xz5mc3mi2ij35w2by3wd9sdt0yavb73gj3mmuist9z8vf0o6ssg8cz3l2rf7894bq8a4hwhmbz2y1o1mpnj9qegcbqujkxnehv7jquw68iadkbikgto9qtb41vfxtdz0am9c1hpnydlwxtp9zylwdgc7448b6hrxaucmnbu8ck9d5l13fzn8ynzljou6q04ai9n23pbi5shb8o8xdgr5tax0712qpb94bxc64dlo77w7zu75zu5sffrrl31ktkrsl48a1tcb411pfnwbjpifs29a2be38thi97wgdeab5jd33xqquvcnh8ybp6v3se62kuxti7lf0gz1llhguppks2j9r8cmyv2yvhk4u4gxnmahmab56w71r4jarkptpo1lma5vzpmvtyqt20m1r5n73yahu5im764ocim3kwc8co610okxncverpg23bi4pm0wjgpj5m8xmgf6zgjexr1itf2yzasz8gizutsg6wc67ghiso9tbhnibpmju8b3wmqcdbq3u7q7yi9xhvb3yztlqafyskpp5lh0wu29rt86mjm8jlb51ovgrhahvu4o4vcn5pjipd2tk45pns0tmw3fajmo8kyam2w3gw11txtd1yiyje6f4d3pnis2gzi5qng6il8j2vggmfnih74fy692d6z2ilz2ik7ak3qc3x6mscksefqluffes647ubtu0s7z3chx0j3m2g1ccr39spl8pbr1otwpwcw4pueq0exx21aj5mqr4y3cv0sjaieofwmlp0m7gqv8byd8lppn6545i4ovsu8revaj4dv4nvhds88oa5altg9pr0q4vjibv398dnjfr9p0uee5xaquar87g7bptonz9gcrow1cuxzfo4t5no8vdesyt877kope1rb9383qyg34cbdmujtitl2ij1logki6v16hkk927a3fmr9jv9jhf3oloau78k6m5nz2mlhx8q7ja26sjfy9eioks9xhb1wsk5z9res8k27kfajc23ypbtm2z0sgwv70ztwzfv4x76dgrs10flawstzr70ylv8fgejj8uk22yzdhsw55yzu31wdxnrz4ba862imokxeuyqjxa80sx2wzw9nuds6rrdtsroge2pkhcbvy1x1v0cwyh01tf0d79uym7v6pj2k1qx8gaw9mqwpn5ztyntwiwflorlqkbht6h97cvevrltknmjo9eifa9vqwkf6qsy02dea3eln9rvw7nlu348awpoqt7pu9ze0gemdsuzrgf6fb125681r9s5t19wzl4hwhr5iwqbokz8bkadk4e1zn6rptrhvzsntrcx16bklwo18hpini8psb18mtdvnpxfdwexypfr6v4gpzpg9knf3ipqlra93yi2jowwmyzqnsl7vy4ksc1baf5tam40s61a6lxg4n9l1dcf851n2rxg23vidthzgsrx3yedje91yf3th5nj52ocyfwk0510i6qlqeimu2vjdy4dom0w3k2u7r15pfq2vpcfuvphjm77iapltknx6ep2s1aq80ofj9miwqarta5ti5772n2dl9gy5kkfrctvulwven74wlw4avbclakbgma2xclb2o2v4w0anfreclbno1kgv78p5e9twjnznond0aqgrbgfsih541gwwykwasunip5e35i51v9422vpgjifpggg9qrtojldoqusp90nbr5935xovlvfndtur1m8h2yg2eevdfz58nfnp8s4enu3qetexfo78ze2ab7m9jw0whsukji1mkhmuog5yirxczxpfzp9853m9jailb6cmyqdcjgctxi66qoihjb3b9yml2cpr390i360dsslvim1g0sp4llkf34sseiq91rzy27bf15emnpokbpuj5s0ugk32jkybv32n53d797oqkz1vfevnyruswn0csk49 == 
\5\7\3\k\n\3\t\c\9\7\x\n\t\1\9\i\j\1\f\7\8\4\8\l\h\k\d\7\u\2\i\q\x\e\x\2\z\v\s\m\c\9\g\p\x\i\w\5\a\v\h\r\n\o\l\q\c\t\9\2\h\z\q\z\r\j\4\t\m\m\q\c\g\v\v\8\8\m\w\q\y\7\g\i\l\4\r\3\d\r\g\b\h\1\u\d\h\d\b\g\e\r\i\h\c\8\g\z\l\z\t\v\z\a\i\2\e\e\9\b\k\n\r\i\1\v\v\w\d\5\h\d\s\y\h\4\i\7\x\p\h\y\n\i\h\7\j\t\0\n\w\u\3\j\z\f\q\g\6\9\k\f\1\n\u\p\a\1\f\n\j\d\b\7\o\4\d\p\g\e\u\b\5\a\3\b\i\k\w\2\t\h\s\p\i\q\t\5\u\o\w\g\o\n\p\p\c\l\9\t\t\4\d\b\3\5\o\5\x\w\t\v\o\d\g\h\d\p\v\t\n\v\s\4\1\m\l\0\c\u\w\k\q\7\l\s\9\8\9\i\i\o\a\v\0\z\5\8\u\r\f\r\l\j\i\6\d\i\j\y\7\1\m\8\1\y\n\i\3\t\o\j\j\v\1\v\l\f\1\6\6\f\6\s\f\m\i\l\k\q\l\0\m\x\b\b\q\u\3\x\d\t\4\m\h\u\f\n\a\q\i\9\6\1\z\x\w\n\n\b\c\0\7\j\1\h\l\2\b\h\j\e\h\k\6\3\0\8\v\s\j\j\4\5\8\z\w\4\n\y\0\7\p\q\e\2\u\v\8\h\5\8\8\7\s\6\e\7\i\u\p\l\w\z\i\f\j\h\4\g\q\x\n\9\8\z\q\x\u\a\q\1\n\n\h\7\b\1\a\m\q\i\l\l\q\u\4\s\s\d\8\l\6\v\l\f\6\u\7\o\c\x\x\1\o\8\r\s\y\u\s\a\r\z\g\6\i\7\5\g\a\y\5\d\z\k\o\l\i\a\i\w\4\i\s\f\g\o\a\a\w\u\8\a\2\9\d\1\a\a\v\l\j\5\m\f\f\q\z\p\k\5\4\b\d\o\s\h\t\8\5\v\w\i\a\s\p\0\n\v\b\7\9\o\i\t\i\a\7\a\j\s\0\5\2\h\f\l\0\u\l\2\l\z\r\r\r\3\g\s\6\g\u\1\9\0\6\t\t\v\x\k\7\g\n\v\5\z\y\5\1\g\j\c\y\0\k\s\m\9\r\4\r\0\6\9\s\9\s\l\g\q\b\c\r\5\6\i\b\d\o\y\4\r\3\j\d\f\m\8\n\r\q\p\r\v\d\g\6\f\o\f\5\t\v\r\f\y\u\7\q\3\b\i\i\4\w\n\f\o\1\b\x\3\r\g\6\p\e\q\0\z\q\f\n\g\t\k\2\2\t\z\g\2\6\2\u\g\f\z\5\c\x\r\2\z\v\7\5\x\8\g\1\8\m\o\8\c\3\i\a\9\u\m\z\e\w\x\6\6\e\g\l\y\d\l\1\r\7\q\x\v\p\z\6\q\h\d\s\4\c\t\m\8\s\o\v\o\e\u\w\a\t\a\6\o\a\9\n\k\o\t\n\l\6\b\v\z\e\1\4\s\9\u\n\g\d\e\f\7\n\z\8\w\q\9\3\r\1\n\3\o\z\q\a\q\i\m\x\7\r\8\r\0\k\i\l\f\v\6\6\y\g\f\n\x\l\g\i\9\r\p\h\n\b\j\n\g\t\7\0\s\u\0\6\x\t\3\4\t\p\v\c\u\s\a\7\b\h\9\w\3\1\k\f\m\a\6\n\b\y\x\i\h\1\3\w\3\1\r\7\8\s\9\g\8\s\6\6\f\r\1\g\4\p\o\y\u\2\2\3\8\g\e\i\o\u\t\e\j\p\i\m\s\h\0\s\f\h\v\b\6\1\p\9\o\r\y\u\g\i\8\j\v\4\b\j\d\p\o\7\c\b\x\b\i\9\y\s\s\d\m\n\t\u\i\t\y\7\9\4\9\v\1\7\f\1\k\t\0\d\f\9\n\a\y\i\i\7\a\o\1\c\c\a\9\8\t\4\x\7\k\b\r\w\h\9\7\j\g\t\3\f\r\y\9\y\a\h\i\n\o\h\5\f\s\6\h\2\o\p\w\z\0\j\8\n\z\y\u\h\j\1\i\8\n\5\z\f\1\4\v\i\b\9\g\b\i\p\i\c\k\i\w\p\b\s\7\8\n\x\u\n\u\m\e\e\v\5\p\s\h\g\5\9\b\2\p\q\p\l\z\2\v\a\p\8\m\0\v\x\l\j\5\7\n\4\b\x\v\g\l\k\n\s\w\b\d\s\u\k\r\w\p\9\s\a\m\n\b\h\v\n\6\y\j\6\s\k\z\z\n\i\d\n\s\q\1\c\y\p\m\0\9\b\g\j\e\n\d\o\6\o\u\d\6\a\9\q\r\x\7\p\y\u\1\q\o\3\8\9\l\5\c\c\5\2\c\x\2\0\9\0\t\d\f\1\b\4\i\p\a\4\y\h\f\w\f\c\m\v\t\1\c\c\g\l\n\0\q\g\l\y\b\y\0\r\u\6\b\2\5\c\1\0\d\2\d\v\8\s\n\r\8\c\6\g\s\2\2\n\4\t\j\x\9\0\2\k\8\9\r\6\y\v\m\j\l\9\d\1\7\8\0\q\q\0\w\w\2\m\z\a\c\2\g\m\d\b\a\5\g\7\e\x\k\3\s\p\v\j\7\k\n\l\a\x\l\9\1\y\v\9\f\8\f\9\g\n\o\p\3\l\h\d\2\1\o\d\p\t\7\a\7\9\y\q\4\p\p\f\9\s\q\i\z\c\a\p\n\v\4\a\k\f\n\1\v\x\t\v\u\i\2\m\e\t\4\h\0\f\0\f\v\g\4\9\j\h\k\p\0\w\y\b\7\a\a\4\j\q\7\7\y\j\r\q\d\7\y\8\h\y\4\t\v\s\6\z\4\h\a\7\w\f\s\h\6\4\m\y\u\z\s\9\r\7\2\x\n\m\7\m\1\i\r\l\n\3\2\t\c\k\t\1\v\t\6\d\3\a\6\h\n\w\b\7\x\1\q\d\l\n\k\i\y\f\a\3\l\5\b\5\8\z\w\z\p\8\f\p\8\d\o\e\4\4\i\5\2\s\9\1\m\x\s\m\6\x\x\m\b\6\c\a\u\p\8\r\z\l\4\e\b\m\z\s\6\x\c\z\j\i\z\e\9\8\v\u\2\l\v\g\6\x\3\n\t\3\n\z\p\k\j\1\j\q\v\t\v\2\g\m\m\a\v\0\c\n\g\8\w\d\9\f\o\u\s\7\v\j\9\t\m\n\n\s\x\6\p\h\p\5\q\f\r\k\p\9\e\6\9\b\b\h\j\q\5\c\n\4\r\5\a\3\k\n\m\b\b\z\4\1\q\5\c\i\2\k\w\f\r\w\n\w\t\o\k\t\5\v\4\l\1\p\f\y\i\v\d\c\0\m\9\r\1\d\p\7\d\r\j\3\b\e\8\n\w\d\k\1\i\v\3\5\i\u\d\r\r\y\e\f\f\x\8\m\w\5\l\7\m\9\b\l\m\c\c\r\o\d\1\y\x\x\a\q\g\s\g\2\z\6\r\p\r\c\i\l\s\d\g\g\8\3\a\j\1\e\x\a\b\1\3\j\s\l\t\q\j\k\q\a\9\g\z\b\e\u\0\h\s\m\j\m\8\j\4\0\e\8\d\l\t\s\u\0\9\l\m\m\9\1\l\s\z\h\t\i\p\u\3\y\2\5\e\s\t\u\6\z\b\s\a\i\0\e\0\c\q\v\7\t\h\f\k\l\m\v\u\p\w\y\w\9\y\9\c\y\8\u\5\m\s\4\0\2\n\q\r\c\5\0\e\k\i\a\
6\e\l\5\8\t\o\m\0\b\8\s\5\n\z\1\o\1\a\x\r\i\r\i\9\e\5\b\p\o\o\i\e\d\d\o\9\g\5\7\x\8\s\n\8\i\w\x\c\d\j\m\y\y\d\s\q\1\h\h\h\f\q\7\4\p\a\3\0\d\j\q\5\q\u\5\l\a\z\w\z\c\c\t\4\j\m\w\o\r\z\x\3\5\t\l\f\d\d\c\1\f\u\t\0\r\p\6\r\g\v\d\j\4\d\d\a\o\u\y\h\5\s\f\8\h\w\s\2\6\2\p\h\l\n\o\m\l\s\7\1\h\4\q\l\9\6\8\z\l\e\5\k\r\4\n\i\g\i\n\p\t\j\y\f\c\s\d\v\d\c\a\s\a\c\m\4\h\n\9\s\p\m\s\9\e\y\5\m\1\1\s\b\g\5\z\6\x\w\l\0\m\f\7\u\e\f\5\p\g\6\g\d\4\6\z\i\g\x\r\4\q\7\d\m\y\f\7\r\8\0\z\o\5\s\w\f\z\7\6\f\4\l\0\l\g\0\6\g\v\k\3\c\v\s\i\a\h\t\3\7\z\0\f\7\w\i\h\v\2\2\g\j\a\l\3\z\z\x\o\7\k\e\l\s\x\v\f\x\p\e\6\j\n\b\k\3\a\i\7\u\e\a\s\c\c\9\g\f\o\3\r\7\c\v\4\e\k\t\x\u\7\8\2\8\q\v\j\n\u\b\p\v\h\q\2\j\o\t\0\t\v\f\m\x\c\f\l\j\i\z\h\d\a\6\l\d\c\3\o\p\7\t\0\5\5\2\f\t\j\i\t\6\i\o\4\h\9\5\7\4\x\7\r\0\t\z\2\f\f\t\z\p\i\a\6\x\z\5\m\c\3\m\i\2\i\j\3\5\w\2\b\y\3\w\d\9\s\d\t\0\y\a\v\b\7\3\g\j\3\m\m\u\i\s\t\9\z\8\v\f\0\o\6\s\s\g\8\c\z\3\l\2\r\f\7\8\9\4\b\q\8\a\4\h\w\h\m\b\z\2\y\1\o\1\m\p\n\j\9\q\e\g\c\b\q\u\j\k\x\n\e\h\v\7\j\q\u\w\6\8\i\a\d\k\b\i\k\g\t\o\9\q\t\b\4\1\v\f\x\t\d\z\0\a\m\9\c\1\h\p\n\y\d\l\w\x\t\p\9\z\y\l\w\d\g\c\7\4\4\8\b\6\h\r\x\a\u\c\m\n\b\u\8\c\k\9\d\5\l\1\3\f\z\n\8\y\n\z\l\j\o\u\6\q\0\4\a\i\9\n\2\3\p\b\i\5\s\h\b\8\o\8\x\d\g\r\5\t\a\x\0\7\1\2\q\p\b\9\4\b\x\c\6\4\d\l\o\7\7\w\7\z\u\7\5\z\u\5\s\f\f\r\r\l\3\1\k\t\k\r\s\l\4\8\a\1\t\c\b\4\1\1\p\f\n\w\b\j\p\i\f\s\2\9\a\2\b\e\3\8\t\h\i\9\7\w\g\d\e\a\b\5\j\d\3\3\x\q\q\u\v\c\n\h\8\y\b\p\6\v\3\s\e\6\2\k\u\x\t\i\7\l\f\0\g\z\1\l\l\h\g\u\p\p\k\s\2\j\9\r\8\c\m\y\v\2\y\v\h\k\4\u\4\g\x\n\m\a\h\m\a\b\5\6\w\7\1\r\4\j\a\r\k\p\t\p\o\1\l\m\a\5\v\z\p\m\v\t\y\q\t\2\0\m\1\r\5\n\7\3\y\a\h\u\5\i\m\7\6\4\o\c\i\m\3\k\w\c\8\c\o\6\1\0\o\k\x\n\c\v\e\r\p\g\2\3\b\i\4\p\m\0\w\j\g\p\j\5\m\8\x\m\g\f\6\z\g\j\e\x\r\1\i\t\f\2\y\z\a\s\z\8\g\i\z\u\t\s\g\6\w\c\6\7\g\h\i\s\o\9\t\b\h\n\i\b\p\m\j\u\8\b\3\w\m\q\c\d\b\q\3\u\7\q\7\y\i\9\x\h\v\b\3\y\z\t\l\q\a\f\y\s\k\p\p\5\l\h\0\w\u\2\9\r\t\8\6\m\j\m\8\j\l\b\5\1\o\v\g\r\h\a\h\v\u\4\o\4\v\c\n\5\p\j\i\p\d\2\t\k\4\5\p\n\s\0\t\m\w\3\f\a\j\m\o\8\k\y\a\m\2\w\3\g\w\1\1\t\x\t\d\1\y\i\y\j\e\6\f\4\d\3\p\n\i\s\2\g\z\i\5\q\n\g\6\i\l\8\j\2\v\g\g\m\f\n\i\h\7\4\f\y\6\9\2\d\6\z\2\i\l\z\2\i\k\7\a\k\3\q\c\3\x\6\m\s\c\k\s\e\f\q\l\u\f\f\e\s\6\4\7\u\b\t\u\0\s\7\z\3\c\h\x\0\j\3\m\2\g\1\c\c\r\3\9\s\p\l\8\p\b\r\1\o\t\w\p\w\c\w\4\p\u\e\q\0\e\x\x\2\1\a\j\5\m\q\r\4\y\3\c\v\0\s\j\a\i\e\o\f\w\m\l\p\0\m\7\g\q\v\8\b\y\d\8\l\p\p\n\6\5\4\5\i\4\o\v\s\u\8\r\e\v\a\j\4\d\v\4\n\v\h\d\s\8\8\o\a\5\a\l\t\g\9\p\r\0\q\4\v\j\i\b\v\3\9\8\d\n\j\f\r\9\p\0\u\e\e\5\x\a\q\u\a\r\8\7\g\7\b\p\t\o\n\z\9\g\c\r\o\w\1\c\u\x\z\f\o\4\t\5\n\o\8\v\d\e\s\y\t\8\7\7\k\o\p\e\1\r\b\9\3\8\3\q\y\g\3\4\c\b\d\m\u\j\t\i\t\l\2\i\j\1\l\o\g\k\i\6\v\1\6\h\k\k\9\2\7\a\3\f\m\r\9\j\v\9\j\h\f\3\o\l\o\a\u\7\8\k\6\m\5\n\z\2\m\l\h\x\8\q\7\j\a\2\6\s\j\f\y\9\e\i\o\k\s\9\x\h\b\1\w\s\k\5\z\9\r\e\s\8\k\2\7\k\f\a\j\c\2\3\y\p\b\t\m\2\z\0\s\g\w\v\7\0\z\t\w\z\f\v\4\x\7\6\d\g\r\s\1\0\f\l\a\w\s\t\z\r\7\0\y\l\v\8\f\g\e\j\j\8\u\k\2\2\y\z\d\h\s\w\5\5\y\z\u\3\1\w\d\x\n\r\z\4\b\a\8\6\2\i\m\o\k\x\e\u\y\q\j\x\a\8\0\s\x\2\w\z\w\9\n\u\d\s\6\r\r\d\t\s\r\o\g\e\2\p\k\h\c\b\v\y\1\x\1\v\0\c\w\y\h\0\1\t\f\0\d\7\9\u\y\m\7\v\6\p\j\2\k\1\q\x\8\g\a\w\9\m\q\w\p\n\5\z\t\y\n\t\w\i\w\f\l\o\r\l\q\k\b\h\t\6\h\9\7\c\v\e\v\r\l\t\k\n\m\j\o\9\e\i\f\a\9\v\q\w\k\f\6\q\s\y\0\2\d\e\a\3\e\l\n\9\r\v\w\7\n\l\u\3\4\8\a\w\p\o\q\t\7\p\u\9\z\e\0\g\e\m\d\s\u\z\r\g\f\6\f\b\1\2\5\6\8\1\r\9\s\5\t\1\9\w\z\l\4\h\w\h\r\5\i\w\q\b\o\k\z\8\b\k\a\d\k\4\e\1\z\n\6\r\p\t\r\h\v\z\s\n\t\r\c\x\1\6\b\k\l\w\o\1\8\h\p\i\n\i\8\p\s\b\1\8\m\t\d\v\n\p\x\f\d\w\e\x\y\p\f\r\6\v\4\g\p\z\p\g\9\k\n\f\3\i\p\q\l\r\a\9\3\y\i\2\j\o\w\w
\m\y\z\q\n\s\l\7\v\y\4\k\s\c\1\b\a\f\5\t\a\m\4\0\s\6\1\a\6\l\x\g\4\n\9\l\1\d\c\f\8\5\1\n\2\r\x\g\2\3\v\i\d\t\h\z\g\s\r\x\3\y\e\d\j\e\9\1\y\f\3\t\h\5\n\j\5\2\o\c\y\f\w\k\0\5\1\0\i\6\q\l\q\e\i\m\u\2\v\j\d\y\4\d\o\m\0\w\3\k\2\u\7\r\1\5\p\f\q\2\v\p\c\f\u\v\p\h\j\m\7\7\i\a\p\l\t\k\n\x\6\e\p\2\s\1\a\q\8\0\o\f\j\9\m\i\w\q\a\r\t\a\5\t\i\5\7\7\2\n\2\d\l\9\g\y\5\k\k\f\r\c\t\v\u\l\w\v\e\n\7\4\w\l\w\4\a\v\b\c\l\a\k\b\g\m\a\2\x\c\l\b\2\o\2\v\4\w\0\a\n\f\r\e\c\l\b\n\o\1\k\g\v\7\8\p\5\e\9\t\w\j\n\z\n\o\n\d\0\a\q\g\r\b\g\f\s\i\h\5\4\1\g\w\w\y\k\w\a\s\u\n\i\p\5\e\3\5\i\5\1\v\9\4\2\2\v\p\g\j\i\f\p\g\g\g\9\q\r\t\o\j\l\d\o\q\u\s\p\9\0\n\b\r\5\9\3\5\x\o\v\l\v\f\n\d\t\u\r\1\m\8\h\2\y\g\2\e\e\v\d\f\z\5\8\n\f\n\p\8\s\4\e\n\u\3\q\e\t\e\x\f\o\7\8\z\e\2\a\b\7\m\9\j\w\0\w\h\s\u\k\j\i\1\m\k\h\m\u\o\g\5\y\i\r\x\c\z\x\p\f\z\p\9\8\5\3\m\9\j\a\i\l\b\6\c\m\y\q\d\c\j\g\c\t\x\i\6\6\q\o\i\h\j\b\3\b\9\y\m\l\2\c\p\r\3\9\0\i\3\6\0\d\s\s\l\v\i\m\1\g\0\s\p\4\l\l\k\f\3\4\s\s\e\i\q\9\1\r\z\y\2\7\b\f\1\5\e\m\n\p\o\k\b\p\u\j\5\s\0\u\g\k\3\2\j\k\y\b\v\3\2\n\5\3\d\7\9\7\o\q\k\z\1\v\f\e\v\n\y\r\u\s\w\n\0\c\s\k\4\9 ]] 00:06:33.584 ************************************ 00:06:33.584 END TEST dd_rw_offset 00:06:33.584 ************************************ 00:06:33.584 00:06:33.584 real 0m1.103s 00:06:33.584 user 0m0.791s 00:06:33.584 sys 0m0.382s 00:06:33.584 07:10:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.585 07:10:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:33.585 07:10:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:33.585 07:10:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:33.585 07:10:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:33.585 07:10:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:33.585 07:10:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:33.585 07:10:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:33.585 07:10:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:33.585 07:10:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:33.585 07:10:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:33.585 07:10:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:33.585 07:10:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:33.585 07:10:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:33.585 { 00:06:33.585 "subsystems": [ 00:06:33.585 { 00:06:33.585 "subsystem": "bdev", 00:06:33.585 "config": [ 00:06:33.585 { 00:06:33.585 "params": { 00:06:33.585 "trtype": "pcie", 00:06:33.585 "traddr": "0000:00:10.0", 00:06:33.585 "name": "Nvme0" 00:06:33.585 }, 00:06:33.585 "method": "bdev_nvme_attach_controller" 00:06:33.585 }, 00:06:33.585 { 00:06:33.585 "method": "bdev_wait_for_examine" 00:06:33.585 } 00:06:33.585 ] 00:06:33.585 } 00:06:33.585 ] 00:06:33.585 } 00:06:33.585 [2024-07-15 07:10:42.395196] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:33.585 [2024-07-15 07:10:42.395293] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62830 ] 00:06:33.843 [2024-07-15 07:10:42.535810] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.843 [2024-07-15 07:10:42.608444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.843 [2024-07-15 07:10:42.644010] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:34.103  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:34.103 00:06:34.103 07:10:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.103 ************************************ 00:06:34.103 END TEST spdk_dd_basic_rw 00:06:34.103 ************************************ 00:06:34.103 00:06:34.103 real 0m15.927s 00:06:34.103 user 0m11.948s 00:06:34.103 sys 0m4.591s 00:06:34.103 07:10:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.103 07:10:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:34.103 07:10:42 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:34.103 07:10:42 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:34.103 07:10:42 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.103 07:10:42 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.103 07:10:42 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:34.103 ************************************ 00:06:34.103 START TEST spdk_dd_posix 00:06:34.103 ************************************ 00:06:34.103 07:10:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:34.103 * Looking for test storage... 
00:06:34.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:34.103 07:10:43 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:34.103 07:10:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:34.103 07:10:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:34.103 07:10:43 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:34.103 07:10:43 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.103 07:10:43 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.103 07:10:43 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.103 07:10:43 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:34.103 07:10:43 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.103 07:10:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:34.103 07:10:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:34.103 07:10:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:34.103 07:10:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:34.103 07:10:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:34.103 07:10:43 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.103 07:10:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:34.362 07:10:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:34.362 * First test run, liburing in use 00:06:34.362 07:10:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:34.362 07:10:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.362 07:10:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.362 07:10:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:34.362 ************************************ 00:06:34.362 START TEST dd_flag_append 00:06:34.362 ************************************ 00:06:34.362 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:06:34.362 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:34.362 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:34.362 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:34.362 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:34.362 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:34.362 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=nffyb102kh53auwmfwqv049omt1fvgrp 00:06:34.362 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:34.362 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:34.362 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:34.362 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=8smt5kf0j94crhnlj15lcgyksobz3rfy 00:06:34.362 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s nffyb102kh53auwmfwqv049omt1fvgrp 00:06:34.362 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 8smt5kf0j94crhnlj15lcgyksobz3rfy 00:06:34.362 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:34.362 [2024-07-15 07:10:43.128001] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:34.362 [2024-07-15 07:10:43.128150] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62888 ] 00:06:34.362 [2024-07-15 07:10:43.266555] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.620 [2024-07-15 07:10:43.330588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.620 [2024-07-15 07:10:43.362953] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:34.620  Copying: 32/32 [B] (average 31 kBps) 00:06:34.620 00:06:34.620 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 8smt5kf0j94crhnlj15lcgyksobz3rfynffyb102kh53auwmfwqv049omt1fvgrp == \8\s\m\t\5\k\f\0\j\9\4\c\r\h\n\l\j\1\5\l\c\g\y\k\s\o\b\z\3\r\f\y\n\f\f\y\b\1\0\2\k\h\5\3\a\u\w\m\f\w\q\v\0\4\9\o\m\t\1\f\v\g\r\p ]] 00:06:34.620 00:06:34.620 real 0m0.468s 00:06:34.620 user 0m0.250s 00:06:34.620 sys 0m0.181s 00:06:34.620 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.620 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:34.620 ************************************ 00:06:34.620 END TEST dd_flag_append 00:06:34.620 ************************************ 00:06:34.620 07:10:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:34.620 07:10:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:34.879 07:10:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.879 07:10:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.879 07:10:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:34.879 ************************************ 00:06:34.879 START TEST dd_flag_directory 00:06:34.879 ************************************ 00:06:34.879 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:06:34.879 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:34.879 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:34.879 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:34.879 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.879 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.879 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.879 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.879 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:06:34.879 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.879 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.879 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:34.879 07:10:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:34.879 [2024-07-15 07:10:43.637718] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:34.879 [2024-07-15 07:10:43.637815] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62917 ] 00:06:34.879 [2024-07-15 07:10:43.776136] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.138 [2024-07-15 07:10:43.845592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.138 [2024-07-15 07:10:43.877778] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:35.138 [2024-07-15 07:10:43.897681] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:35.138 [2024-07-15 07:10:43.897752] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:35.138 [2024-07-15 07:10:43.897781] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.138 [2024-07-15 07:10:43.964574] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:35.138 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:06:35.138 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:35.138 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:06:35.138 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:06:35.138 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:06:35.138 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:35.138 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:35.138 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:35.138 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:35.138 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.138 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- 
# case "$(type -t "$arg")" in 00:06:35.138 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.138 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.138 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.138 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.138 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.138 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:35.138 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:35.397 [2024-07-15 07:10:44.121189] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:35.397 [2024-07-15 07:10:44.121306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62921 ] 00:06:35.397 [2024-07-15 07:10:44.262790] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.397 [2024-07-15 07:10:44.321954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.656 [2024-07-15 07:10:44.351110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:35.656 [2024-07-15 07:10:44.368465] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:35.656 [2024-07-15 07:10:44.368521] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:35.656 [2024-07-15 07:10:44.368535] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.656 [2024-07-15 07:10:44.430288] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:35.656 00:06:35.656 real 0m0.934s 00:06:35.656 user 0m0.535s 00:06:35.656 sys 0m0.189s 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.656 ************************************ 00:06:35.656 END TEST dd_flag_directory 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:35.656 
************************************ 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:35.656 ************************************ 00:06:35.656 START TEST dd_flag_nofollow 00:06:35.656 ************************************ 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:35.656 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:35.914 
[2024-07-15 07:10:44.613118] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:35.914 [2024-07-15 07:10:44.613209] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62955 ] 00:06:35.914 [2024-07-15 07:10:44.744627] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.914 [2024-07-15 07:10:44.803807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.914 [2024-07-15 07:10:44.832931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:35.914 [2024-07-15 07:10:44.850220] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:35.914 [2024-07-15 07:10:44.850281] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:35.914 [2024-07-15 07:10:44.850296] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:36.188 [2024-07-15 07:10:44.913946] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:36.188 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:06:36.188 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:36.188 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:06:36.188 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:06:36.188 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:06:36.188 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:36.188 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:36.188 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:06:36.188 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:36.188 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.188 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:36.188 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.188 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:36.188 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.188 07:10:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:36.188 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.188 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:36.188 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:36.188 [2024-07-15 07:10:45.062774] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:36.188 [2024-07-15 07:10:45.062909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62959 ] 00:06:36.454 [2024-07-15 07:10:45.205484] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.454 [2024-07-15 07:10:45.264882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.454 [2024-07-15 07:10:45.294324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:36.454 [2024-07-15 07:10:45.311736] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:36.454 [2024-07-15 07:10:45.311791] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:36.454 [2024-07-15 07:10:45.311807] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:36.454 [2024-07-15 07:10:45.374159] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:36.713 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:06:36.713 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:36.713 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:06:36.713 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:06:36.713 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:06:36.713 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:36.713 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:36.713 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:36.713 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:36.713 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:36.713 [2024-07-15 07:10:45.522333] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
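The nofollow sequence around this point condenses to the sketch below; the symlink setup and the three spdk_dd invocations mirror the trace, with plain ! standing in for the suite's NOT wrapper:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    d=/home/vagrant/spdk_repo/spdk/test/dd
    ln -fs "$d/dd.dump0" "$d/dd.dump0.link"
    ln -fs "$d/dd.dump1" "$d/dd.dump1.link"
    ! "$DD" --if="$d/dd.dump0.link" --iflag=nofollow --of="$d/dd.dump1"   # must fail: symlink refused on input
    ! "$DD" --if="$d/dd.dump0" --of="$d/dd.dump1.link" --oflag=nofollow   # must fail: symlink refused on output
    "$DD" --if="$d/dd.dump0.link" --of="$d/dd.dump1"                      # without nofollow the link is followed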
00:06:36.713 [2024-07-15 07:10:45.522433] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62973 ] 00:06:36.713 [2024-07-15 07:10:45.659996] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.971 [2024-07-15 07:10:45.719768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.971 [2024-07-15 07:10:45.749215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:36.971  Copying: 512/512 [B] (average 500 kBps) 00:06:36.971 00:06:36.971 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ hrxcpvt4hehol64cv0flfzyli7tqy0orq4wpkkreyidw8t62po66rcly6umetewz61kfffz4bf8go7v9g5wm7xy8ooexpi4ap5pk8i91yrme5fvmlz9m0v5rb4xlbs75w2luu1r8tqo1uwgypw1uas1choxozst39qagq7zvo8u8t9raeebp6d6cmabzij2iu7tq4yw1sm0mviwc4b4yrks5j5541377a23cuun7fyqhcdh1qjqae6v4xq90wks834ahnx2lxibin5dn146a7np84qqwnge6vgdmbxf1tt6mgcf3zbxxjpe2fgnmifgm5zm3b06nytm387a55amo62w33d5ic2k43bu56cz75oo4yetqzklx58f2dkux9oth9t14jx6fprpezlsdh0v1km07pov9jsvca928z04iymvb2b91g73lh7vowshj2mny4igofka9hgseewe5hl3klk0sqy52eeb6lz2wr982kb2p21zvfa1727od58im0meq == \h\r\x\c\p\v\t\4\h\e\h\o\l\6\4\c\v\0\f\l\f\z\y\l\i\7\t\q\y\0\o\r\q\4\w\p\k\k\r\e\y\i\d\w\8\t\6\2\p\o\6\6\r\c\l\y\6\u\m\e\t\e\w\z\6\1\k\f\f\f\z\4\b\f\8\g\o\7\v\9\g\5\w\m\7\x\y\8\o\o\e\x\p\i\4\a\p\5\p\k\8\i\9\1\y\r\m\e\5\f\v\m\l\z\9\m\0\v\5\r\b\4\x\l\b\s\7\5\w\2\l\u\u\1\r\8\t\q\o\1\u\w\g\y\p\w\1\u\a\s\1\c\h\o\x\o\z\s\t\3\9\q\a\g\q\7\z\v\o\8\u\8\t\9\r\a\e\e\b\p\6\d\6\c\m\a\b\z\i\j\2\i\u\7\t\q\4\y\w\1\s\m\0\m\v\i\w\c\4\b\4\y\r\k\s\5\j\5\5\4\1\3\7\7\a\2\3\c\u\u\n\7\f\y\q\h\c\d\h\1\q\j\q\a\e\6\v\4\x\q\9\0\w\k\s\8\3\4\a\h\n\x\2\l\x\i\b\i\n\5\d\n\1\4\6\a\7\n\p\8\4\q\q\w\n\g\e\6\v\g\d\m\b\x\f\1\t\t\6\m\g\c\f\3\z\b\x\x\j\p\e\2\f\g\n\m\i\f\g\m\5\z\m\3\b\0\6\n\y\t\m\3\8\7\a\5\5\a\m\o\6\2\w\3\3\d\5\i\c\2\k\4\3\b\u\5\6\c\z\7\5\o\o\4\y\e\t\q\z\k\l\x\5\8\f\2\d\k\u\x\9\o\t\h\9\t\1\4\j\x\6\f\p\r\p\e\z\l\s\d\h\0\v\1\k\m\0\7\p\o\v\9\j\s\v\c\a\9\2\8\z\0\4\i\y\m\v\b\2\b\9\1\g\7\3\l\h\7\v\o\w\s\h\j\2\m\n\y\4\i\g\o\f\k\a\9\h\g\s\e\e\w\e\5\h\l\3\k\l\k\0\s\q\y\5\2\e\e\b\6\l\z\2\w\r\9\8\2\k\b\2\p\2\1\z\v\f\a\1\7\2\7\o\d\5\8\i\m\0\m\e\q ]] 00:06:36.971 00:06:36.971 real 0m1.359s 00:06:36.971 user 0m0.752s 00:06:36.971 sys 0m0.367s 00:06:36.971 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.971 ************************************ 00:06:36.971 END TEST dd_flag_nofollow 00:06:36.971 ************************************ 00:06:36.971 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:37.229 07:10:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:37.229 07:10:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:37.229 07:10:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:37.229 07:10:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.229 07:10:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:37.229 ************************************ 00:06:37.229 START TEST dd_flag_noatime 00:06:37.229 ************************************ 00:06:37.229 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:06:37.229 07:10:45 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:06:37.229 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:37.229 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:37.229 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:37.229 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:37.229 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:37.229 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721027445 00:06:37.229 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:37.229 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721027445 00:06:37.229 07:10:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:38.164 07:10:46 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:38.164 [2024-07-15 07:10:47.045648] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:38.164 [2024-07-15 07:10:47.045738] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63010 ] 00:06:38.423 [2024-07-15 07:10:47.187248] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.423 [2024-07-15 07:10:47.264289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.423 [2024-07-15 07:10:47.298368] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:38.682  Copying: 512/512 [B] (average 500 kBps) 00:06:38.682 00:06:38.683 07:10:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:38.683 07:10:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721027445 )) 00:06:38.683 07:10:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:38.683 07:10:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721027445 )) 00:06:38.683 07:10:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:38.683 [2024-07-15 07:10:47.521991] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
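The noatime check compares access times around the copy. A condensed sketch of the logic visible in this trace; stat --printf=%X prints the atime in epoch seconds, and whether the final unflagged read actually advances it depends on the filesystem's atime policy, which the test environment evidently satisfies:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    d=/home/vagrant/spdk_repo/spdk/test/dd
    atime_if=$(stat --printf=%X "$d/dd.dump0")                   # access time before the copy
    sleep 1
    "$DD" --if="$d/dd.dump0" --iflag=noatime --of="$d/dd.dump1"
    (( atime_if == $(stat --printf=%X "$d/dd.dump0") ))          # noatime: reading must not bump atime
    "$DD" --if="$d/dd.dump0" --of="$d/dd.dump1"
    (( atime_if < $(stat --printf=%X "$d/dd.dump0") ))           # a plain read is expected to advance it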
00:06:38.683 [2024-07-15 07:10:47.522082] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63029 ] 00:06:38.942 [2024-07-15 07:10:47.651499] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.942 [2024-07-15 07:10:47.707512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.942 [2024-07-15 07:10:47.736057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:39.201  Copying: 512/512 [B] (average 500 kBps) 00:06:39.201 00:06:39.201 07:10:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:39.201 ************************************ 00:06:39.201 END TEST dd_flag_noatime 00:06:39.201 ************************************ 00:06:39.201 07:10:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721027447 )) 00:06:39.201 00:06:39.201 real 0m1.947s 00:06:39.201 user 0m0.518s 00:06:39.201 sys 0m0.367s 00:06:39.201 07:10:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.201 07:10:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:39.201 07:10:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:39.201 07:10:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:39.201 07:10:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.201 07:10:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.201 07:10:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:39.201 ************************************ 00:06:39.201 START TEST dd_flags_misc 00:06:39.201 ************************************ 00:06:39.201 07:10:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:06:39.201 07:10:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:39.201 07:10:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:39.201 07:10:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:39.201 07:10:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:39.201 07:10:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:39.201 07:10:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:39.201 07:10:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:39.201 07:10:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:39.201 07:10:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:39.201 [2024-07-15 07:10:48.031492] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
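dd_flags_misc, which starts above, drives each read flag against each write flag and verifies the copy every time. A sketch of that loop with the verification simplified to cmp; the suite instead compares a freshly generated 512-byte payload against the output file, as the long [[ ... ]] checks below show:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    d=/home/vagrant/spdk_repo/spdk/test/dd
    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
        for flag_rw in "${flags_rw[@]}"; do
            "$DD" --if="$d/dd.dump0" --iflag="$flag_ro" --of="$d/dd.dump1" --oflag="$flag_rw"
            cmp -s "$d/dd.dump0" "$d/dd.dump1"   # simplified stand-in for the payload comparison
        done
    done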
00:06:39.201 [2024-07-15 07:10:48.031644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63052 ] 00:06:39.460 [2024-07-15 07:10:48.174527] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.461 [2024-07-15 07:10:48.236356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.461 [2024-07-15 07:10:48.268016] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:39.719  Copying: 512/512 [B] (average 500 kBps) 00:06:39.720 00:06:39.720 07:10:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fch4gfdq8uilszcxt7h9kev7osxxato26mumwa22j8b58ah0g9skzgdp3x6w9yj68xs6yplqhkf0vhodhszvp0ro6rvcj01pak3p9m0uftx080lp13vx58bu3hg6ot3jkz8stsjql47afxjzo4c27f8b558nbytsjjwbcjuj8rfq42wvv14zg4icwupprt3gom6q29nnchqgom45culo4mvvux5ltprm5z9ddxrj0k1jv7dkoxp867ckjw68i3tc3zgxtxvdc7j3wzzeeuydyhah01zr5oo6p9fne0uq64ym4qeqmfbikozlaf7tw76adqrr287c25zxnskm8xlp4i30kc1t1ktlyvf142c7763ck4hpat75huy5cc0ajetqr4plfdfosvz36dp5ytwzl4lj07kldz5em6msu2jv8jlmdw3agswcdtzibh03uoe69yk3tt4527c4i2mpen574vc5zi0siql9ks0zqqtoyyz2v788clmzgonvsvo0raca == \f\c\h\4\g\f\d\q\8\u\i\l\s\z\c\x\t\7\h\9\k\e\v\7\o\s\x\x\a\t\o\2\6\m\u\m\w\a\2\2\j\8\b\5\8\a\h\0\g\9\s\k\z\g\d\p\3\x\6\w\9\y\j\6\8\x\s\6\y\p\l\q\h\k\f\0\v\h\o\d\h\s\z\v\p\0\r\o\6\r\v\c\j\0\1\p\a\k\3\p\9\m\0\u\f\t\x\0\8\0\l\p\1\3\v\x\5\8\b\u\3\h\g\6\o\t\3\j\k\z\8\s\t\s\j\q\l\4\7\a\f\x\j\z\o\4\c\2\7\f\8\b\5\5\8\n\b\y\t\s\j\j\w\b\c\j\u\j\8\r\f\q\4\2\w\v\v\1\4\z\g\4\i\c\w\u\p\p\r\t\3\g\o\m\6\q\2\9\n\n\c\h\q\g\o\m\4\5\c\u\l\o\4\m\v\v\u\x\5\l\t\p\r\m\5\z\9\d\d\x\r\j\0\k\1\j\v\7\d\k\o\x\p\8\6\7\c\k\j\w\6\8\i\3\t\c\3\z\g\x\t\x\v\d\c\7\j\3\w\z\z\e\e\u\y\d\y\h\a\h\0\1\z\r\5\o\o\6\p\9\f\n\e\0\u\q\6\4\y\m\4\q\e\q\m\f\b\i\k\o\z\l\a\f\7\t\w\7\6\a\d\q\r\r\2\8\7\c\2\5\z\x\n\s\k\m\8\x\l\p\4\i\3\0\k\c\1\t\1\k\t\l\y\v\f\1\4\2\c\7\7\6\3\c\k\4\h\p\a\t\7\5\h\u\y\5\c\c\0\a\j\e\t\q\r\4\p\l\f\d\f\o\s\v\z\3\6\d\p\5\y\t\w\z\l\4\l\j\0\7\k\l\d\z\5\e\m\6\m\s\u\2\j\v\8\j\l\m\d\w\3\a\g\s\w\c\d\t\z\i\b\h\0\3\u\o\e\6\9\y\k\3\t\t\4\5\2\7\c\4\i\2\m\p\e\n\5\7\4\v\c\5\z\i\0\s\i\q\l\9\k\s\0\z\q\q\t\o\y\y\z\2\v\7\8\8\c\l\m\z\g\o\n\v\s\v\o\0\r\a\c\a ]] 00:06:39.720 07:10:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:39.720 07:10:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:39.720 [2024-07-15 07:10:48.500029] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:39.720 [2024-07-15 07:10:48.500212] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63067 ] 00:06:39.720 [2024-07-15 07:10:48.639255] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.979 [2024-07-15 07:10:48.699424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.979 [2024-07-15 07:10:48.729728] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:39.979  Copying: 512/512 [B] (average 500 kBps) 00:06:39.979 00:06:39.979 07:10:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fch4gfdq8uilszcxt7h9kev7osxxato26mumwa22j8b58ah0g9skzgdp3x6w9yj68xs6yplqhkf0vhodhszvp0ro6rvcj01pak3p9m0uftx080lp13vx58bu3hg6ot3jkz8stsjql47afxjzo4c27f8b558nbytsjjwbcjuj8rfq42wvv14zg4icwupprt3gom6q29nnchqgom45culo4mvvux5ltprm5z9ddxrj0k1jv7dkoxp867ckjw68i3tc3zgxtxvdc7j3wzzeeuydyhah01zr5oo6p9fne0uq64ym4qeqmfbikozlaf7tw76adqrr287c25zxnskm8xlp4i30kc1t1ktlyvf142c7763ck4hpat75huy5cc0ajetqr4plfdfosvz36dp5ytwzl4lj07kldz5em6msu2jv8jlmdw3agswcdtzibh03uoe69yk3tt4527c4i2mpen574vc5zi0siql9ks0zqqtoyyz2v788clmzgonvsvo0raca == \f\c\h\4\g\f\d\q\8\u\i\l\s\z\c\x\t\7\h\9\k\e\v\7\o\s\x\x\a\t\o\2\6\m\u\m\w\a\2\2\j\8\b\5\8\a\h\0\g\9\s\k\z\g\d\p\3\x\6\w\9\y\j\6\8\x\s\6\y\p\l\q\h\k\f\0\v\h\o\d\h\s\z\v\p\0\r\o\6\r\v\c\j\0\1\p\a\k\3\p\9\m\0\u\f\t\x\0\8\0\l\p\1\3\v\x\5\8\b\u\3\h\g\6\o\t\3\j\k\z\8\s\t\s\j\q\l\4\7\a\f\x\j\z\o\4\c\2\7\f\8\b\5\5\8\n\b\y\t\s\j\j\w\b\c\j\u\j\8\r\f\q\4\2\w\v\v\1\4\z\g\4\i\c\w\u\p\p\r\t\3\g\o\m\6\q\2\9\n\n\c\h\q\g\o\m\4\5\c\u\l\o\4\m\v\v\u\x\5\l\t\p\r\m\5\z\9\d\d\x\r\j\0\k\1\j\v\7\d\k\o\x\p\8\6\7\c\k\j\w\6\8\i\3\t\c\3\z\g\x\t\x\v\d\c\7\j\3\w\z\z\e\e\u\y\d\y\h\a\h\0\1\z\r\5\o\o\6\p\9\f\n\e\0\u\q\6\4\y\m\4\q\e\q\m\f\b\i\k\o\z\l\a\f\7\t\w\7\6\a\d\q\r\r\2\8\7\c\2\5\z\x\n\s\k\m\8\x\l\p\4\i\3\0\k\c\1\t\1\k\t\l\y\v\f\1\4\2\c\7\7\6\3\c\k\4\h\p\a\t\7\5\h\u\y\5\c\c\0\a\j\e\t\q\r\4\p\l\f\d\f\o\s\v\z\3\6\d\p\5\y\t\w\z\l\4\l\j\0\7\k\l\d\z\5\e\m\6\m\s\u\2\j\v\8\j\l\m\d\w\3\a\g\s\w\c\d\t\z\i\b\h\0\3\u\o\e\6\9\y\k\3\t\t\4\5\2\7\c\4\i\2\m\p\e\n\5\7\4\v\c\5\z\i\0\s\i\q\l\9\k\s\0\z\q\q\t\o\y\y\z\2\v\7\8\8\c\l\m\z\g\o\n\v\s\v\o\0\r\a\c\a ]] 00:06:39.979 07:10:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:39.979 07:10:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:40.238 [2024-07-15 07:10:48.953942] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:40.238 [2024-07-15 07:10:48.954042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63071 ] 00:06:40.238 [2024-07-15 07:10:49.093790] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.238 [2024-07-15 07:10:49.154891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.238 [2024-07-15 07:10:49.186782] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:40.496  Copying: 512/512 [B] (average 166 kBps) 00:06:40.496 00:06:40.497 07:10:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fch4gfdq8uilszcxt7h9kev7osxxato26mumwa22j8b58ah0g9skzgdp3x6w9yj68xs6yplqhkf0vhodhszvp0ro6rvcj01pak3p9m0uftx080lp13vx58bu3hg6ot3jkz8stsjql47afxjzo4c27f8b558nbytsjjwbcjuj8rfq42wvv14zg4icwupprt3gom6q29nnchqgom45culo4mvvux5ltprm5z9ddxrj0k1jv7dkoxp867ckjw68i3tc3zgxtxvdc7j3wzzeeuydyhah01zr5oo6p9fne0uq64ym4qeqmfbikozlaf7tw76adqrr287c25zxnskm8xlp4i30kc1t1ktlyvf142c7763ck4hpat75huy5cc0ajetqr4plfdfosvz36dp5ytwzl4lj07kldz5em6msu2jv8jlmdw3agswcdtzibh03uoe69yk3tt4527c4i2mpen574vc5zi0siql9ks0zqqtoyyz2v788clmzgonvsvo0raca == \f\c\h\4\g\f\d\q\8\u\i\l\s\z\c\x\t\7\h\9\k\e\v\7\o\s\x\x\a\t\o\2\6\m\u\m\w\a\2\2\j\8\b\5\8\a\h\0\g\9\s\k\z\g\d\p\3\x\6\w\9\y\j\6\8\x\s\6\y\p\l\q\h\k\f\0\v\h\o\d\h\s\z\v\p\0\r\o\6\r\v\c\j\0\1\p\a\k\3\p\9\m\0\u\f\t\x\0\8\0\l\p\1\3\v\x\5\8\b\u\3\h\g\6\o\t\3\j\k\z\8\s\t\s\j\q\l\4\7\a\f\x\j\z\o\4\c\2\7\f\8\b\5\5\8\n\b\y\t\s\j\j\w\b\c\j\u\j\8\r\f\q\4\2\w\v\v\1\4\z\g\4\i\c\w\u\p\p\r\t\3\g\o\m\6\q\2\9\n\n\c\h\q\g\o\m\4\5\c\u\l\o\4\m\v\v\u\x\5\l\t\p\r\m\5\z\9\d\d\x\r\j\0\k\1\j\v\7\d\k\o\x\p\8\6\7\c\k\j\w\6\8\i\3\t\c\3\z\g\x\t\x\v\d\c\7\j\3\w\z\z\e\e\u\y\d\y\h\a\h\0\1\z\r\5\o\o\6\p\9\f\n\e\0\u\q\6\4\y\m\4\q\e\q\m\f\b\i\k\o\z\l\a\f\7\t\w\7\6\a\d\q\r\r\2\8\7\c\2\5\z\x\n\s\k\m\8\x\l\p\4\i\3\0\k\c\1\t\1\k\t\l\y\v\f\1\4\2\c\7\7\6\3\c\k\4\h\p\a\t\7\5\h\u\y\5\c\c\0\a\j\e\t\q\r\4\p\l\f\d\f\o\s\v\z\3\6\d\p\5\y\t\w\z\l\4\l\j\0\7\k\l\d\z\5\e\m\6\m\s\u\2\j\v\8\j\l\m\d\w\3\a\g\s\w\c\d\t\z\i\b\h\0\3\u\o\e\6\9\y\k\3\t\t\4\5\2\7\c\4\i\2\m\p\e\n\5\7\4\v\c\5\z\i\0\s\i\q\l\9\k\s\0\z\q\q\t\o\y\y\z\2\v\7\8\8\c\l\m\z\g\o\n\v\s\v\o\0\r\a\c\a ]] 00:06:40.497 07:10:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:40.497 07:10:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:40.497 [2024-07-15 07:10:49.413656] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:40.497 [2024-07-15 07:10:49.413745] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63085 ] 00:06:40.756 [2024-07-15 07:10:49.550706] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.756 [2024-07-15 07:10:49.610903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.756 [2024-07-15 07:10:49.642793] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.015  Copying: 512/512 [B] (average 250 kBps) 00:06:41.015 00:06:41.015 07:10:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fch4gfdq8uilszcxt7h9kev7osxxato26mumwa22j8b58ah0g9skzgdp3x6w9yj68xs6yplqhkf0vhodhszvp0ro6rvcj01pak3p9m0uftx080lp13vx58bu3hg6ot3jkz8stsjql47afxjzo4c27f8b558nbytsjjwbcjuj8rfq42wvv14zg4icwupprt3gom6q29nnchqgom45culo4mvvux5ltprm5z9ddxrj0k1jv7dkoxp867ckjw68i3tc3zgxtxvdc7j3wzzeeuydyhah01zr5oo6p9fne0uq64ym4qeqmfbikozlaf7tw76adqrr287c25zxnskm8xlp4i30kc1t1ktlyvf142c7763ck4hpat75huy5cc0ajetqr4plfdfosvz36dp5ytwzl4lj07kldz5em6msu2jv8jlmdw3agswcdtzibh03uoe69yk3tt4527c4i2mpen574vc5zi0siql9ks0zqqtoyyz2v788clmzgonvsvo0raca == \f\c\h\4\g\f\d\q\8\u\i\l\s\z\c\x\t\7\h\9\k\e\v\7\o\s\x\x\a\t\o\2\6\m\u\m\w\a\2\2\j\8\b\5\8\a\h\0\g\9\s\k\z\g\d\p\3\x\6\w\9\y\j\6\8\x\s\6\y\p\l\q\h\k\f\0\v\h\o\d\h\s\z\v\p\0\r\o\6\r\v\c\j\0\1\p\a\k\3\p\9\m\0\u\f\t\x\0\8\0\l\p\1\3\v\x\5\8\b\u\3\h\g\6\o\t\3\j\k\z\8\s\t\s\j\q\l\4\7\a\f\x\j\z\o\4\c\2\7\f\8\b\5\5\8\n\b\y\t\s\j\j\w\b\c\j\u\j\8\r\f\q\4\2\w\v\v\1\4\z\g\4\i\c\w\u\p\p\r\t\3\g\o\m\6\q\2\9\n\n\c\h\q\g\o\m\4\5\c\u\l\o\4\m\v\v\u\x\5\l\t\p\r\m\5\z\9\d\d\x\r\j\0\k\1\j\v\7\d\k\o\x\p\8\6\7\c\k\j\w\6\8\i\3\t\c\3\z\g\x\t\x\v\d\c\7\j\3\w\z\z\e\e\u\y\d\y\h\a\h\0\1\z\r\5\o\o\6\p\9\f\n\e\0\u\q\6\4\y\m\4\q\e\q\m\f\b\i\k\o\z\l\a\f\7\t\w\7\6\a\d\q\r\r\2\8\7\c\2\5\z\x\n\s\k\m\8\x\l\p\4\i\3\0\k\c\1\t\1\k\t\l\y\v\f\1\4\2\c\7\7\6\3\c\k\4\h\p\a\t\7\5\h\u\y\5\c\c\0\a\j\e\t\q\r\4\p\l\f\d\f\o\s\v\z\3\6\d\p\5\y\t\w\z\l\4\l\j\0\7\k\l\d\z\5\e\m\6\m\s\u\2\j\v\8\j\l\m\d\w\3\a\g\s\w\c\d\t\z\i\b\h\0\3\u\o\e\6\9\y\k\3\t\t\4\5\2\7\c\4\i\2\m\p\e\n\5\7\4\v\c\5\z\i\0\s\i\q\l\9\k\s\0\z\q\q\t\o\y\y\z\2\v\7\8\8\c\l\m\z\g\o\n\v\s\v\o\0\r\a\c\a ]] 00:06:41.015 07:10:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:41.015 07:10:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:41.015 07:10:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:41.015 07:10:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:41.015 07:10:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:41.015 07:10:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:41.015 [2024-07-15 07:10:49.883795] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:41.015 [2024-07-15 07:10:49.883897] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63090 ] 00:06:41.274 [2024-07-15 07:10:50.020635] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.274 [2024-07-15 07:10:50.082293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.274 [2024-07-15 07:10:50.111785] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.533  Copying: 512/512 [B] (average 500 kBps) 00:06:41.533 00:06:41.533 07:10:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xnvsrname6e39fjc1108fgjntrv1zthpxt5msbvc9v0e3cfct6dgvkpt3do8pfd70daoqizm9wg5ljy2su8mqj98f56q8ydner2ydf9edjnavfb6yygv0glp5w4v1t6htjqesq2ii3n7il1rt5wgdt6djr1aicrxuzluc9vm1wy1tl1w8rhngv1jleimlq3we3s8v3gllkc8bin9ibp0uwhpcxknlzemmup57ud87ijsot39i1q0hbztiv77lnlbizmh97mfkc7nhwumtyg72yavhn1g8ti10tcqckboxh5e9lxv4bih1jdwpkth8q8g13p8yvoy5xttga62o66wzucxde8ywl17ovtgvu8q98hapnvfm9zbsx54t11ihf807veuob8xnplk9hyqxwwv5i8gkfqk39wf0fydbo3mif9r8jrdtjj9otygl2g9187724q1socoyvx52kwy1ao8qahtz9exbc0n1mmqwuw50r3sip96vxq9vqw223x31uz2 == \x\n\v\s\r\n\a\m\e\6\e\3\9\f\j\c\1\1\0\8\f\g\j\n\t\r\v\1\z\t\h\p\x\t\5\m\s\b\v\c\9\v\0\e\3\c\f\c\t\6\d\g\v\k\p\t\3\d\o\8\p\f\d\7\0\d\a\o\q\i\z\m\9\w\g\5\l\j\y\2\s\u\8\m\q\j\9\8\f\5\6\q\8\y\d\n\e\r\2\y\d\f\9\e\d\j\n\a\v\f\b\6\y\y\g\v\0\g\l\p\5\w\4\v\1\t\6\h\t\j\q\e\s\q\2\i\i\3\n\7\i\l\1\r\t\5\w\g\d\t\6\d\j\r\1\a\i\c\r\x\u\z\l\u\c\9\v\m\1\w\y\1\t\l\1\w\8\r\h\n\g\v\1\j\l\e\i\m\l\q\3\w\e\3\s\8\v\3\g\l\l\k\c\8\b\i\n\9\i\b\p\0\u\w\h\p\c\x\k\n\l\z\e\m\m\u\p\5\7\u\d\8\7\i\j\s\o\t\3\9\i\1\q\0\h\b\z\t\i\v\7\7\l\n\l\b\i\z\m\h\9\7\m\f\k\c\7\n\h\w\u\m\t\y\g\7\2\y\a\v\h\n\1\g\8\t\i\1\0\t\c\q\c\k\b\o\x\h\5\e\9\l\x\v\4\b\i\h\1\j\d\w\p\k\t\h\8\q\8\g\1\3\p\8\y\v\o\y\5\x\t\t\g\a\6\2\o\6\6\w\z\u\c\x\d\e\8\y\w\l\1\7\o\v\t\g\v\u\8\q\9\8\h\a\p\n\v\f\m\9\z\b\s\x\5\4\t\1\1\i\h\f\8\0\7\v\e\u\o\b\8\x\n\p\l\k\9\h\y\q\x\w\w\v\5\i\8\g\k\f\q\k\3\9\w\f\0\f\y\d\b\o\3\m\i\f\9\r\8\j\r\d\t\j\j\9\o\t\y\g\l\2\g\9\1\8\7\7\2\4\q\1\s\o\c\o\y\v\x\5\2\k\w\y\1\a\o\8\q\a\h\t\z\9\e\x\b\c\0\n\1\m\m\q\w\u\w\5\0\r\3\s\i\p\9\6\v\x\q\9\v\q\w\2\2\3\x\3\1\u\z\2 ]] 00:06:41.533 07:10:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:41.533 07:10:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:41.533 [2024-07-15 07:10:50.342188] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:41.533 [2024-07-15 07:10:50.342282] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63100 ] 00:06:41.533 [2024-07-15 07:10:50.480831] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.792 [2024-07-15 07:10:50.541141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.792 [2024-07-15 07:10:50.571517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.792  Copying: 512/512 [B] (average 500 kBps) 00:06:41.792 00:06:42.052 07:10:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xnvsrname6e39fjc1108fgjntrv1zthpxt5msbvc9v0e3cfct6dgvkpt3do8pfd70daoqizm9wg5ljy2su8mqj98f56q8ydner2ydf9edjnavfb6yygv0glp5w4v1t6htjqesq2ii3n7il1rt5wgdt6djr1aicrxuzluc9vm1wy1tl1w8rhngv1jleimlq3we3s8v3gllkc8bin9ibp0uwhpcxknlzemmup57ud87ijsot39i1q0hbztiv77lnlbizmh97mfkc7nhwumtyg72yavhn1g8ti10tcqckboxh5e9lxv4bih1jdwpkth8q8g13p8yvoy5xttga62o66wzucxde8ywl17ovtgvu8q98hapnvfm9zbsx54t11ihf807veuob8xnplk9hyqxwwv5i8gkfqk39wf0fydbo3mif9r8jrdtjj9otygl2g9187724q1socoyvx52kwy1ao8qahtz9exbc0n1mmqwuw50r3sip96vxq9vqw223x31uz2 == \x\n\v\s\r\n\a\m\e\6\e\3\9\f\j\c\1\1\0\8\f\g\j\n\t\r\v\1\z\t\h\p\x\t\5\m\s\b\v\c\9\v\0\e\3\c\f\c\t\6\d\g\v\k\p\t\3\d\o\8\p\f\d\7\0\d\a\o\q\i\z\m\9\w\g\5\l\j\y\2\s\u\8\m\q\j\9\8\f\5\6\q\8\y\d\n\e\r\2\y\d\f\9\e\d\j\n\a\v\f\b\6\y\y\g\v\0\g\l\p\5\w\4\v\1\t\6\h\t\j\q\e\s\q\2\i\i\3\n\7\i\l\1\r\t\5\w\g\d\t\6\d\j\r\1\a\i\c\r\x\u\z\l\u\c\9\v\m\1\w\y\1\t\l\1\w\8\r\h\n\g\v\1\j\l\e\i\m\l\q\3\w\e\3\s\8\v\3\g\l\l\k\c\8\b\i\n\9\i\b\p\0\u\w\h\p\c\x\k\n\l\z\e\m\m\u\p\5\7\u\d\8\7\i\j\s\o\t\3\9\i\1\q\0\h\b\z\t\i\v\7\7\l\n\l\b\i\z\m\h\9\7\m\f\k\c\7\n\h\w\u\m\t\y\g\7\2\y\a\v\h\n\1\g\8\t\i\1\0\t\c\q\c\k\b\o\x\h\5\e\9\l\x\v\4\b\i\h\1\j\d\w\p\k\t\h\8\q\8\g\1\3\p\8\y\v\o\y\5\x\t\t\g\a\6\2\o\6\6\w\z\u\c\x\d\e\8\y\w\l\1\7\o\v\t\g\v\u\8\q\9\8\h\a\p\n\v\f\m\9\z\b\s\x\5\4\t\1\1\i\h\f\8\0\7\v\e\u\o\b\8\x\n\p\l\k\9\h\y\q\x\w\w\v\5\i\8\g\k\f\q\k\3\9\w\f\0\f\y\d\b\o\3\m\i\f\9\r\8\j\r\d\t\j\j\9\o\t\y\g\l\2\g\9\1\8\7\7\2\4\q\1\s\o\c\o\y\v\x\5\2\k\w\y\1\a\o\8\q\a\h\t\z\9\e\x\b\c\0\n\1\m\m\q\w\u\w\5\0\r\3\s\i\p\9\6\v\x\q\9\v\q\w\2\2\3\x\3\1\u\z\2 ]] 00:06:42.052 07:10:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:42.052 07:10:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:42.052 [2024-07-15 07:10:50.795862] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:42.052 [2024-07-15 07:10:50.795968] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63109 ] 00:06:42.052 [2024-07-15 07:10:50.934412] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.052 [2024-07-15 07:10:50.995508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.311 [2024-07-15 07:10:51.028487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.311  Copying: 512/512 [B] (average 166 kBps) 00:06:42.311 00:06:42.311 07:10:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xnvsrname6e39fjc1108fgjntrv1zthpxt5msbvc9v0e3cfct6dgvkpt3do8pfd70daoqizm9wg5ljy2su8mqj98f56q8ydner2ydf9edjnavfb6yygv0glp5w4v1t6htjqesq2ii3n7il1rt5wgdt6djr1aicrxuzluc9vm1wy1tl1w8rhngv1jleimlq3we3s8v3gllkc8bin9ibp0uwhpcxknlzemmup57ud87ijsot39i1q0hbztiv77lnlbizmh97mfkc7nhwumtyg72yavhn1g8ti10tcqckboxh5e9lxv4bih1jdwpkth8q8g13p8yvoy5xttga62o66wzucxde8ywl17ovtgvu8q98hapnvfm9zbsx54t11ihf807veuob8xnplk9hyqxwwv5i8gkfqk39wf0fydbo3mif9r8jrdtjj9otygl2g9187724q1socoyvx52kwy1ao8qahtz9exbc0n1mmqwuw50r3sip96vxq9vqw223x31uz2 == \x\n\v\s\r\n\a\m\e\6\e\3\9\f\j\c\1\1\0\8\f\g\j\n\t\r\v\1\z\t\h\p\x\t\5\m\s\b\v\c\9\v\0\e\3\c\f\c\t\6\d\g\v\k\p\t\3\d\o\8\p\f\d\7\0\d\a\o\q\i\z\m\9\w\g\5\l\j\y\2\s\u\8\m\q\j\9\8\f\5\6\q\8\y\d\n\e\r\2\y\d\f\9\e\d\j\n\a\v\f\b\6\y\y\g\v\0\g\l\p\5\w\4\v\1\t\6\h\t\j\q\e\s\q\2\i\i\3\n\7\i\l\1\r\t\5\w\g\d\t\6\d\j\r\1\a\i\c\r\x\u\z\l\u\c\9\v\m\1\w\y\1\t\l\1\w\8\r\h\n\g\v\1\j\l\e\i\m\l\q\3\w\e\3\s\8\v\3\g\l\l\k\c\8\b\i\n\9\i\b\p\0\u\w\h\p\c\x\k\n\l\z\e\m\m\u\p\5\7\u\d\8\7\i\j\s\o\t\3\9\i\1\q\0\h\b\z\t\i\v\7\7\l\n\l\b\i\z\m\h\9\7\m\f\k\c\7\n\h\w\u\m\t\y\g\7\2\y\a\v\h\n\1\g\8\t\i\1\0\t\c\q\c\k\b\o\x\h\5\e\9\l\x\v\4\b\i\h\1\j\d\w\p\k\t\h\8\q\8\g\1\3\p\8\y\v\o\y\5\x\t\t\g\a\6\2\o\6\6\w\z\u\c\x\d\e\8\y\w\l\1\7\o\v\t\g\v\u\8\q\9\8\h\a\p\n\v\f\m\9\z\b\s\x\5\4\t\1\1\i\h\f\8\0\7\v\e\u\o\b\8\x\n\p\l\k\9\h\y\q\x\w\w\v\5\i\8\g\k\f\q\k\3\9\w\f\0\f\y\d\b\o\3\m\i\f\9\r\8\j\r\d\t\j\j\9\o\t\y\g\l\2\g\9\1\8\7\7\2\4\q\1\s\o\c\o\y\v\x\5\2\k\w\y\1\a\o\8\q\a\h\t\z\9\e\x\b\c\0\n\1\m\m\q\w\u\w\5\0\r\3\s\i\p\9\6\v\x\q\9\v\q\w\2\2\3\x\3\1\u\z\2 ]] 00:06:42.311 07:10:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:42.311 07:10:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:42.311 [2024-07-15 07:10:51.259657] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:42.311 [2024-07-15 07:10:51.259764] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63113 ] 00:06:42.570 [2024-07-15 07:10:51.399022] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.570 [2024-07-15 07:10:51.462978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.570 [2024-07-15 07:10:51.495752] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.830  Copying: 512/512 [B] (average 250 kBps) 00:06:42.830 00:06:42.830 07:10:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xnvsrname6e39fjc1108fgjntrv1zthpxt5msbvc9v0e3cfct6dgvkpt3do8pfd70daoqizm9wg5ljy2su8mqj98f56q8ydner2ydf9edjnavfb6yygv0glp5w4v1t6htjqesq2ii3n7il1rt5wgdt6djr1aicrxuzluc9vm1wy1tl1w8rhngv1jleimlq3we3s8v3gllkc8bin9ibp0uwhpcxknlzemmup57ud87ijsot39i1q0hbztiv77lnlbizmh97mfkc7nhwumtyg72yavhn1g8ti10tcqckboxh5e9lxv4bih1jdwpkth8q8g13p8yvoy5xttga62o66wzucxde8ywl17ovtgvu8q98hapnvfm9zbsx54t11ihf807veuob8xnplk9hyqxwwv5i8gkfqk39wf0fydbo3mif9r8jrdtjj9otygl2g9187724q1socoyvx52kwy1ao8qahtz9exbc0n1mmqwuw50r3sip96vxq9vqw223x31uz2 == \x\n\v\s\r\n\a\m\e\6\e\3\9\f\j\c\1\1\0\8\f\g\j\n\t\r\v\1\z\t\h\p\x\t\5\m\s\b\v\c\9\v\0\e\3\c\f\c\t\6\d\g\v\k\p\t\3\d\o\8\p\f\d\7\0\d\a\o\q\i\z\m\9\w\g\5\l\j\y\2\s\u\8\m\q\j\9\8\f\5\6\q\8\y\d\n\e\r\2\y\d\f\9\e\d\j\n\a\v\f\b\6\y\y\g\v\0\g\l\p\5\w\4\v\1\t\6\h\t\j\q\e\s\q\2\i\i\3\n\7\i\l\1\r\t\5\w\g\d\t\6\d\j\r\1\a\i\c\r\x\u\z\l\u\c\9\v\m\1\w\y\1\t\l\1\w\8\r\h\n\g\v\1\j\l\e\i\m\l\q\3\w\e\3\s\8\v\3\g\l\l\k\c\8\b\i\n\9\i\b\p\0\u\w\h\p\c\x\k\n\l\z\e\m\m\u\p\5\7\u\d\8\7\i\j\s\o\t\3\9\i\1\q\0\h\b\z\t\i\v\7\7\l\n\l\b\i\z\m\h\9\7\m\f\k\c\7\n\h\w\u\m\t\y\g\7\2\y\a\v\h\n\1\g\8\t\i\1\0\t\c\q\c\k\b\o\x\h\5\e\9\l\x\v\4\b\i\h\1\j\d\w\p\k\t\h\8\q\8\g\1\3\p\8\y\v\o\y\5\x\t\t\g\a\6\2\o\6\6\w\z\u\c\x\d\e\8\y\w\l\1\7\o\v\t\g\v\u\8\q\9\8\h\a\p\n\v\f\m\9\z\b\s\x\5\4\t\1\1\i\h\f\8\0\7\v\e\u\o\b\8\x\n\p\l\k\9\h\y\q\x\w\w\v\5\i\8\g\k\f\q\k\3\9\w\f\0\f\y\d\b\o\3\m\i\f\9\r\8\j\r\d\t\j\j\9\o\t\y\g\l\2\g\9\1\8\7\7\2\4\q\1\s\o\c\o\y\v\x\5\2\k\w\y\1\a\o\8\q\a\h\t\z\9\e\x\b\c\0\n\1\m\m\q\w\u\w\5\0\r\3\s\i\p\9\6\v\x\q\9\v\q\w\2\2\3\x\3\1\u\z\2 ]] 00:06:42.830 00:06:42.830 real 0m3.706s 00:06:42.830 user 0m2.054s 00:06:42.830 sys 0m1.440s 00:06:42.830 07:10:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.830 ************************************ 00:06:42.830 END TEST dd_flags_misc 00:06:42.830 ************************************ 00:06:42.830 07:10:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:42.830 07:10:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:42.830 07:10:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:42.830 07:10:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:42.830 * Second test run, disabling liburing, forcing AIO 00:06:42.830 07:10:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:42.830 07:10:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:42.830 07:10:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.830 07:10:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:06:42.830 07:10:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:42.830 ************************************ 00:06:42.830 START TEST dd_flag_append_forced_aio 00:06:42.830 ************************************ 00:06:42.830 07:10:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:06:42.830 07:10:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:42.830 07:10:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:42.830 07:10:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:42.830 07:10:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:42.830 07:10:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:42.830 07:10:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=ikugaucq9q36dg4mwj6rtqeojbgi3i1d 00:06:42.830 07:10:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:42.830 07:10:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:42.830 07:10:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:42.830 07:10:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=l1yl6n5gb5x5y20b4wza90e729ue9nk9 00:06:42.830 07:10:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s ikugaucq9q36dg4mwj6rtqeojbgi3i1d 00:06:42.830 07:10:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s l1yl6n5gb5x5y20b4wza90e729ue9nk9 00:06:42.830 07:10:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:43.089 [2024-07-15 07:10:51.784157] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
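The forced-AIO variant repeats the earlier append check but passes --aio on the spdk_dd command line, taking liburing out of the picture for the copy. A one-line sketch with paths from the trace:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    d=/home/vagrant/spdk_repo/spdk/test/dd
    "$DD" --aio --if="$d/dd.dump0" --of="$d/dd.dump1" --oflag=append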
00:06:43.089 [2024-07-15 07:10:51.784269] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63147 ] 00:06:43.089 [2024-07-15 07:10:51.922965] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.089 [2024-07-15 07:10:51.981771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.089 [2024-07-15 07:10:52.011670] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.347  Copying: 32/32 [B] (average 31 kBps) 00:06:43.347 00:06:43.347 ************************************ 00:06:43.347 END TEST dd_flag_append_forced_aio 00:06:43.347 ************************************ 00:06:43.347 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ l1yl6n5gb5x5y20b4wza90e729ue9nk9ikugaucq9q36dg4mwj6rtqeojbgi3i1d == \l\1\y\l\6\n\5\g\b\5\x\5\y\2\0\b\4\w\z\a\9\0\e\7\2\9\u\e\9\n\k\9\i\k\u\g\a\u\c\q\9\q\3\6\d\g\4\m\w\j\6\r\t\q\e\o\j\b\g\i\3\i\1\d ]] 00:06:43.347 00:06:43.347 real 0m0.491s 00:06:43.347 user 0m0.272s 00:06:43.347 sys 0m0.097s 00:06:43.347 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.348 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:43.348 07:10:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:43.348 07:10:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:43.348 07:10:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.348 07:10:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.348 07:10:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:43.348 ************************************ 00:06:43.348 START TEST dd_flag_directory_forced_aio 00:06:43.348 ************************************ 00:06:43.348 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:06:43.348 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:43.348 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:43.348 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:43.348 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.348 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.348 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.348 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:06:43.348 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.348 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.348 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.348 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:43.348 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:43.606 [2024-07-15 07:10:52.318558] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:43.606 [2024-07-15 07:10:52.318646] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63168 ] 00:06:43.606 [2024-07-15 07:10:52.451118] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.606 [2024-07-15 07:10:52.510228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.606 [2024-07-15 07:10:52.540096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.606 [2024-07-15 07:10:52.557963] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:43.606 [2024-07-15 07:10:52.558021] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:43.606 [2024-07-15 07:10:52.558037] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:43.865 [2024-07-15 07:10:52.621951] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:43.865 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:06:43.865 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:43.865 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:06:43.865 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:43.865 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:43.865 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:43.865 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:43.865 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:43.865 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:43.865 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.865 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.865 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.865 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.865 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.865 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.865 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.865 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:43.865 07:10:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:43.865 [2024-07-15 07:10:52.758769] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:43.865 [2024-07-15 07:10:52.759066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63183 ] 00:06:44.123 [2024-07-15 07:10:52.893256] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.123 [2024-07-15 07:10:52.952689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.123 [2024-07-15 07:10:52.982368] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:44.123 [2024-07-15 07:10:53.000125] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:44.123 [2024-07-15 07:10:53.000422] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:44.123 [2024-07-15 07:10:53.000532] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.123 [2024-07-15 07:10:53.064913] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:06:44.385 ************************************ 00:06:44.385 END TEST dd_flag_directory_forced_aio 00:06:44.385 ************************************ 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 
-- # case "$es" in 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:44.385 00:06:44.385 real 0m0.886s 00:06:44.385 user 0m0.499s 00:06:44.385 sys 0m0.176s 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:44.385 ************************************ 00:06:44.385 START TEST dd_flag_nofollow_forced_aio 00:06:44.385 ************************************ 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:44.385 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:44.385 [2024-07-15 07:10:53.261679] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:44.385 [2024-07-15 07:10:53.261783] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63206 ] 00:06:44.644 [2024-07-15 07:10:53.400738] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.644 [2024-07-15 07:10:53.460177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.644 [2024-07-15 07:10:53.489801] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:44.644 [2024-07-15 07:10:53.507531] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:44.644 [2024-07-15 07:10:53.507588] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:44.644 [2024-07-15 07:10:53.507604] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.644 [2024-07-15 07:10:53.570709] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:44.902 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:06:44.902 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:44.902 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:06:44.902 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:44.902 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:44.902 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:44.902 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:44.902 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:44.902 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 
00:06:44.902 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.902 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.902 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.902 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.902 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.902 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.902 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.902 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:44.902 07:10:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:44.902 [2024-07-15 07:10:53.705418] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:44.902 [2024-07-15 07:10:53.705511] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63221 ] 00:06:44.902 [2024-07-15 07:10:53.836513] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.257 [2024-07-15 07:10:53.896244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.257 [2024-07-15 07:10:53.926095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:45.257 [2024-07-15 07:10:53.943942] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:45.257 [2024-07-15 07:10:53.944002] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:45.257 [2024-07-15 07:10:53.944018] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.257 [2024-07-15 07:10:54.009557] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:45.257 07:10:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:06:45.257 07:10:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:45.257 07:10:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:06:45.257 07:10:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:45.257 07:10:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:45.257 07:10:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:45.257 07:10:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:06:45.257 07:10:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:45.257 07:10:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:45.257 07:10:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:45.257 [2024-07-15 07:10:54.173411] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:45.257 [2024-07-15 07:10:54.173505] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63223 ] 00:06:45.515 [2024-07-15 07:10:54.310003] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.515 [2024-07-15 07:10:54.369980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.515 [2024-07-15 07:10:54.399906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:45.774  Copying: 512/512 [B] (average 500 kBps) 00:06:45.774 00:06:45.774 ************************************ 00:06:45.774 END TEST dd_flag_nofollow_forced_aio 00:06:45.774 ************************************ 00:06:45.774 07:10:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ g4koohte77j8fibw5t7k0lcokjjmr5r6igspb6m79802yg4s3h142nbpmtsyza5gfz2rmhoqgj130kbhhpc151k6201mttvx57z9k7l2gfxo0g1d8mx3m2yfq81g2fjzd3m8u13aijpzhbahpwej3g9pqyeesmn8v0zk1s92cruu948a9kmjj497u5ondgf9tppj6446g699qhgls95tnelb32uxz4xy52luuc2j4ndw23vfr508361a4lmdsc6pcx4iv52e9w3qcm86nv4xuxeptluvnvxdo99llisuf3hu2mq5u1ypadk6omvpstmgv5e5d5v2od2vcc36o372qb2va5jb6puziy9cmdbr0fm702y4ab52su1yocqr127gnqh29e058zgzha02ilyqprn3xbiqlyia0glappf6vjvomty9xb6r0ddikzrvfckuv9pvn86u8inj29x5nkwcsckx30q28a6yno5p49cxa8s8f9fxf7lbjkcrsnjhhkeu == \g\4\k\o\o\h\t\e\7\7\j\8\f\i\b\w\5\t\7\k\0\l\c\o\k\j\j\m\r\5\r\6\i\g\s\p\b\6\m\7\9\8\0\2\y\g\4\s\3\h\1\4\2\n\b\p\m\t\s\y\z\a\5\g\f\z\2\r\m\h\o\q\g\j\1\3\0\k\b\h\h\p\c\1\5\1\k\6\2\0\1\m\t\t\v\x\5\7\z\9\k\7\l\2\g\f\x\o\0\g\1\d\8\m\x\3\m\2\y\f\q\8\1\g\2\f\j\z\d\3\m\8\u\1\3\a\i\j\p\z\h\b\a\h\p\w\e\j\3\g\9\p\q\y\e\e\s\m\n\8\v\0\z\k\1\s\9\2\c\r\u\u\9\4\8\a\9\k\m\j\j\4\9\7\u\5\o\n\d\g\f\9\t\p\p\j\6\4\4\6\g\6\9\9\q\h\g\l\s\9\5\t\n\e\l\b\3\2\u\x\z\4\x\y\5\2\l\u\u\c\2\j\4\n\d\w\2\3\v\f\r\5\0\8\3\6\1\a\4\l\m\d\s\c\6\p\c\x\4\i\v\5\2\e\9\w\3\q\c\m\8\6\n\v\4\x\u\x\e\p\t\l\u\v\n\v\x\d\o\9\9\l\l\i\s\u\f\3\h\u\2\m\q\5\u\1\y\p\a\d\k\6\o\m\v\p\s\t\m\g\v\5\e\5\d\5\v\2\o\d\2\v\c\c\3\6\o\3\7\2\q\b\2\v\a\5\j\b\6\p\u\z\i\y\9\c\m\d\b\r\0\f\m\7\0\2\y\4\a\b\5\2\s\u\1\y\o\c\q\r\1\2\7\g\n\q\h\2\9\e\0\5\8\z\g\z\h\a\0\2\i\l\y\q\p\r\n\3\x\b\i\q\l\y\i\a\0\g\l\a\p\p\f\6\v\j\v\o\m\t\y\9\x\b\6\r\0\d\d\i\k\z\r\v\f\c\k\u\v\9\p\v\n\8\6\u\8\i\n\j\2\9\x\5\n\k\w\c\s\c\k\x\3\0\q\2\8\a\6\y\n\o\5\p\4\9\c\x\a\8\s\8\f\9\f\x\f\7\l\b\j\k\c\r\s\n\j\h\h\k\e\u ]] 00:06:45.774 00:06:45.774 real 0m1.390s 00:06:45.774 user 0m0.763s 00:06:45.774 sys 0m0.292s 00:06:45.774 07:10:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.774 07:10:54 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:45.774 07:10:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:45.774 07:10:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:06:45.774 07:10:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.774 07:10:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.774 07:10:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:45.774 ************************************ 00:06:45.774 START TEST dd_flag_noatime_forced_aio 00:06:45.774 ************************************ 00:06:45.774 07:10:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:06:45.774 07:10:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:45.774 07:10:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:45.774 07:10:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:45.774 07:10:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:45.774 07:10:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:45.774 07:10:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:45.774 07:10:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721027454 00:06:45.774 07:10:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:45.774 07:10:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721027454 00:06:45.774 07:10:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:46.713 07:10:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:46.972 [2024-07-15 07:10:55.715248] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:46.972 [2024-07-15 07:10:55.715351] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63264 ] 00:06:46.972 [2024-07-15 07:10:55.856065] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.231 [2024-07-15 07:10:55.925804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.231 [2024-07-15 07:10:55.958840] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:47.231  Copying: 512/512 [B] (average 500 kBps) 00:06:47.231 00:06:47.231 07:10:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:47.231 07:10:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721027454 )) 00:06:47.231 07:10:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.231 07:10:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721027454 )) 00:06:47.231 07:10:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.490 [2024-07-15 07:10:56.225832] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:47.490 [2024-07-15 07:10:56.225953] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63275 ] 00:06:47.490 [2024-07-15 07:10:56.370681] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.490 [2024-07-15 07:10:56.430796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.749 [2024-07-15 07:10:56.460875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:47.749  Copying: 512/512 [B] (average 500 kBps) 00:06:47.749 00:06:47.749 07:10:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:47.749 07:10:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721027456 )) 00:06:47.749 00:06:47.749 real 0m2.017s 00:06:47.749 user 0m0.556s 00:06:47.749 sys 0m0.218s 00:06:47.749 07:10:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.749 ************************************ 00:06:47.749 END TEST dd_flag_noatime_forced_aio 00:06:47.749 ************************************ 00:06:47.749 07:10:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:47.749 07:10:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:47.749 07:10:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:47.749 07:10:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.749 07:10:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.749 07:10:56 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:48.007 ************************************ 00:06:48.007 START TEST dd_flags_misc_forced_aio 00:06:48.007 ************************************ 00:06:48.007 07:10:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:06:48.007 07:10:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:48.007 07:10:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:48.007 07:10:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:48.007 07:10:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:48.007 07:10:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:48.007 07:10:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:48.007 07:10:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:48.007 07:10:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:48.007 07:10:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:48.007 [2024-07-15 07:10:56.760905] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:48.007 [2024-07-15 07:10:56.761002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63302 ] 00:06:48.007 [2024-07-15 07:10:56.899005] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.007 [2024-07-15 07:10:56.958757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.267 [2024-07-15 07:10:56.988500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.267  Copying: 512/512 [B] (average 500 kBps) 00:06:48.267 00:06:48.267 07:10:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ bbcim2ppw7tgg51qujfxtfrv7pp3ipsgtj9s2rm9710t7bpv5497eere9lrc1a1ok6cxkcbcg9100dd8ruk6d9pzmctw4shx83a7mlsijg7iwjiv07xzl60j87e6buamnwccdpzn1i4gsr4p0brnd7rohmi8guu9w8dirtfvjwywdw4k3owinjwrwjx4ybjm90kq8w5ob3lhe6wkakehvjydgncj3cejs5bqhfirsau3lizejc2nge98ex6dki6y6do1g0pk16hz80dd60zlq0paa6c17rhsto9ydypt7amk832oghospsb3s01whcbipyc3j8zv81iy4bgdegji6h40degiyl87xl449bj0le8vl6ydwkgz18mdjgmjus8c2cf0jrh7zkkxnkqbuzvpfs6ffb92xao6h40fl7fsnk452ybtxkhmqhtha7mbh79agqhiw867twjwym78fnn63f1kypnpo5iuptufcm7r4o5xo9em4kcucvgej7vw9f7p == 
\b\b\c\i\m\2\p\p\w\7\t\g\g\5\1\q\u\j\f\x\t\f\r\v\7\p\p\3\i\p\s\g\t\j\9\s\2\r\m\9\7\1\0\t\7\b\p\v\5\4\9\7\e\e\r\e\9\l\r\c\1\a\1\o\k\6\c\x\k\c\b\c\g\9\1\0\0\d\d\8\r\u\k\6\d\9\p\z\m\c\t\w\4\s\h\x\8\3\a\7\m\l\s\i\j\g\7\i\w\j\i\v\0\7\x\z\l\6\0\j\8\7\e\6\b\u\a\m\n\w\c\c\d\p\z\n\1\i\4\g\s\r\4\p\0\b\r\n\d\7\r\o\h\m\i\8\g\u\u\9\w\8\d\i\r\t\f\v\j\w\y\w\d\w\4\k\3\o\w\i\n\j\w\r\w\j\x\4\y\b\j\m\9\0\k\q\8\w\5\o\b\3\l\h\e\6\w\k\a\k\e\h\v\j\y\d\g\n\c\j\3\c\e\j\s\5\b\q\h\f\i\r\s\a\u\3\l\i\z\e\j\c\2\n\g\e\9\8\e\x\6\d\k\i\6\y\6\d\o\1\g\0\p\k\1\6\h\z\8\0\d\d\6\0\z\l\q\0\p\a\a\6\c\1\7\r\h\s\t\o\9\y\d\y\p\t\7\a\m\k\8\3\2\o\g\h\o\s\p\s\b\3\s\0\1\w\h\c\b\i\p\y\c\3\j\8\z\v\8\1\i\y\4\b\g\d\e\g\j\i\6\h\4\0\d\e\g\i\y\l\8\7\x\l\4\4\9\b\j\0\l\e\8\v\l\6\y\d\w\k\g\z\1\8\m\d\j\g\m\j\u\s\8\c\2\c\f\0\j\r\h\7\z\k\k\x\n\k\q\b\u\z\v\p\f\s\6\f\f\b\9\2\x\a\o\6\h\4\0\f\l\7\f\s\n\k\4\5\2\y\b\t\x\k\h\m\q\h\t\h\a\7\m\b\h\7\9\a\g\q\h\i\w\8\6\7\t\w\j\w\y\m\7\8\f\n\n\6\3\f\1\k\y\p\n\p\o\5\i\u\p\t\u\f\c\m\7\r\4\o\5\x\o\9\e\m\4\k\c\u\c\v\g\e\j\7\v\w\9\f\7\p ]] 00:06:48.267 07:10:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:48.267 07:10:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:48.526 [2024-07-15 07:10:57.226098] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:48.526 [2024-07-15 07:10:57.226205] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63309 ] 00:06:48.526 [2024-07-15 07:10:57.365309] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.526 [2024-07-15 07:10:57.427515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.526 [2024-07-15 07:10:57.457707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.784  Copying: 512/512 [B] (average 500 kBps) 00:06:48.784 00:06:48.784 07:10:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ bbcim2ppw7tgg51qujfxtfrv7pp3ipsgtj9s2rm9710t7bpv5497eere9lrc1a1ok6cxkcbcg9100dd8ruk6d9pzmctw4shx83a7mlsijg7iwjiv07xzl60j87e6buamnwccdpzn1i4gsr4p0brnd7rohmi8guu9w8dirtfvjwywdw4k3owinjwrwjx4ybjm90kq8w5ob3lhe6wkakehvjydgncj3cejs5bqhfirsau3lizejc2nge98ex6dki6y6do1g0pk16hz80dd60zlq0paa6c17rhsto9ydypt7amk832oghospsb3s01whcbipyc3j8zv81iy4bgdegji6h40degiyl87xl449bj0le8vl6ydwkgz18mdjgmjus8c2cf0jrh7zkkxnkqbuzvpfs6ffb92xao6h40fl7fsnk452ybtxkhmqhtha7mbh79agqhiw867twjwym78fnn63f1kypnpo5iuptufcm7r4o5xo9em4kcucvgej7vw9f7p == 
\b\b\c\i\m\2\p\p\w\7\t\g\g\5\1\q\u\j\f\x\t\f\r\v\7\p\p\3\i\p\s\g\t\j\9\s\2\r\m\9\7\1\0\t\7\b\p\v\5\4\9\7\e\e\r\e\9\l\r\c\1\a\1\o\k\6\c\x\k\c\b\c\g\9\1\0\0\d\d\8\r\u\k\6\d\9\p\z\m\c\t\w\4\s\h\x\8\3\a\7\m\l\s\i\j\g\7\i\w\j\i\v\0\7\x\z\l\6\0\j\8\7\e\6\b\u\a\m\n\w\c\c\d\p\z\n\1\i\4\g\s\r\4\p\0\b\r\n\d\7\r\o\h\m\i\8\g\u\u\9\w\8\d\i\r\t\f\v\j\w\y\w\d\w\4\k\3\o\w\i\n\j\w\r\w\j\x\4\y\b\j\m\9\0\k\q\8\w\5\o\b\3\l\h\e\6\w\k\a\k\e\h\v\j\y\d\g\n\c\j\3\c\e\j\s\5\b\q\h\f\i\r\s\a\u\3\l\i\z\e\j\c\2\n\g\e\9\8\e\x\6\d\k\i\6\y\6\d\o\1\g\0\p\k\1\6\h\z\8\0\d\d\6\0\z\l\q\0\p\a\a\6\c\1\7\r\h\s\t\o\9\y\d\y\p\t\7\a\m\k\8\3\2\o\g\h\o\s\p\s\b\3\s\0\1\w\h\c\b\i\p\y\c\3\j\8\z\v\8\1\i\y\4\b\g\d\e\g\j\i\6\h\4\0\d\e\g\i\y\l\8\7\x\l\4\4\9\b\j\0\l\e\8\v\l\6\y\d\w\k\g\z\1\8\m\d\j\g\m\j\u\s\8\c\2\c\f\0\j\r\h\7\z\k\k\x\n\k\q\b\u\z\v\p\f\s\6\f\f\b\9\2\x\a\o\6\h\4\0\f\l\7\f\s\n\k\4\5\2\y\b\t\x\k\h\m\q\h\t\h\a\7\m\b\h\7\9\a\g\q\h\i\w\8\6\7\t\w\j\w\y\m\7\8\f\n\n\6\3\f\1\k\y\p\n\p\o\5\i\u\p\t\u\f\c\m\7\r\4\o\5\x\o\9\e\m\4\k\c\u\c\v\g\e\j\7\v\w\9\f\7\p ]] 00:06:48.784 07:10:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:48.784 07:10:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:48.784 [2024-07-15 07:10:57.711918] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:48.784 [2024-07-15 07:10:57.712022] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63317 ] 00:06:49.043 [2024-07-15 07:10:57.850111] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.043 [2024-07-15 07:10:57.910448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.043 [2024-07-15 07:10:57.940027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:49.303  Copying: 512/512 [B] (average 500 kBps) 00:06:49.303 00:06:49.303 07:10:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ bbcim2ppw7tgg51qujfxtfrv7pp3ipsgtj9s2rm9710t7bpv5497eere9lrc1a1ok6cxkcbcg9100dd8ruk6d9pzmctw4shx83a7mlsijg7iwjiv07xzl60j87e6buamnwccdpzn1i4gsr4p0brnd7rohmi8guu9w8dirtfvjwywdw4k3owinjwrwjx4ybjm90kq8w5ob3lhe6wkakehvjydgncj3cejs5bqhfirsau3lizejc2nge98ex6dki6y6do1g0pk16hz80dd60zlq0paa6c17rhsto9ydypt7amk832oghospsb3s01whcbipyc3j8zv81iy4bgdegji6h40degiyl87xl449bj0le8vl6ydwkgz18mdjgmjus8c2cf0jrh7zkkxnkqbuzvpfs6ffb92xao6h40fl7fsnk452ybtxkhmqhtha7mbh79agqhiw867twjwym78fnn63f1kypnpo5iuptufcm7r4o5xo9em4kcucvgej7vw9f7p == 
\b\b\c\i\m\2\p\p\w\7\t\g\g\5\1\q\u\j\f\x\t\f\r\v\7\p\p\3\i\p\s\g\t\j\9\s\2\r\m\9\7\1\0\t\7\b\p\v\5\4\9\7\e\e\r\e\9\l\r\c\1\a\1\o\k\6\c\x\k\c\b\c\g\9\1\0\0\d\d\8\r\u\k\6\d\9\p\z\m\c\t\w\4\s\h\x\8\3\a\7\m\l\s\i\j\g\7\i\w\j\i\v\0\7\x\z\l\6\0\j\8\7\e\6\b\u\a\m\n\w\c\c\d\p\z\n\1\i\4\g\s\r\4\p\0\b\r\n\d\7\r\o\h\m\i\8\g\u\u\9\w\8\d\i\r\t\f\v\j\w\y\w\d\w\4\k\3\o\w\i\n\j\w\r\w\j\x\4\y\b\j\m\9\0\k\q\8\w\5\o\b\3\l\h\e\6\w\k\a\k\e\h\v\j\y\d\g\n\c\j\3\c\e\j\s\5\b\q\h\f\i\r\s\a\u\3\l\i\z\e\j\c\2\n\g\e\9\8\e\x\6\d\k\i\6\y\6\d\o\1\g\0\p\k\1\6\h\z\8\0\d\d\6\0\z\l\q\0\p\a\a\6\c\1\7\r\h\s\t\o\9\y\d\y\p\t\7\a\m\k\8\3\2\o\g\h\o\s\p\s\b\3\s\0\1\w\h\c\b\i\p\y\c\3\j\8\z\v\8\1\i\y\4\b\g\d\e\g\j\i\6\h\4\0\d\e\g\i\y\l\8\7\x\l\4\4\9\b\j\0\l\e\8\v\l\6\y\d\w\k\g\z\1\8\m\d\j\g\m\j\u\s\8\c\2\c\f\0\j\r\h\7\z\k\k\x\n\k\q\b\u\z\v\p\f\s\6\f\f\b\9\2\x\a\o\6\h\4\0\f\l\7\f\s\n\k\4\5\2\y\b\t\x\k\h\m\q\h\t\h\a\7\m\b\h\7\9\a\g\q\h\i\w\8\6\7\t\w\j\w\y\m\7\8\f\n\n\6\3\f\1\k\y\p\n\p\o\5\i\u\p\t\u\f\c\m\7\r\4\o\5\x\o\9\e\m\4\k\c\u\c\v\g\e\j\7\v\w\9\f\7\p ]] 00:06:49.303 07:10:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:49.303 07:10:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:49.303 [2024-07-15 07:10:58.214566] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:06:49.303 [2024-07-15 07:10:58.214716] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63324 ] 00:06:49.562 [2024-07-15 07:10:58.364915] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.562 [2024-07-15 07:10:58.424712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.562 [2024-07-15 07:10:58.455525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:49.821  Copying: 512/512 [B] (average 500 kBps) 00:06:49.821 00:06:49.821 07:10:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ bbcim2ppw7tgg51qujfxtfrv7pp3ipsgtj9s2rm9710t7bpv5497eere9lrc1a1ok6cxkcbcg9100dd8ruk6d9pzmctw4shx83a7mlsijg7iwjiv07xzl60j87e6buamnwccdpzn1i4gsr4p0brnd7rohmi8guu9w8dirtfvjwywdw4k3owinjwrwjx4ybjm90kq8w5ob3lhe6wkakehvjydgncj3cejs5bqhfirsau3lizejc2nge98ex6dki6y6do1g0pk16hz80dd60zlq0paa6c17rhsto9ydypt7amk832oghospsb3s01whcbipyc3j8zv81iy4bgdegji6h40degiyl87xl449bj0le8vl6ydwkgz18mdjgmjus8c2cf0jrh7zkkxnkqbuzvpfs6ffb92xao6h40fl7fsnk452ybtxkhmqhtha7mbh79agqhiw867twjwym78fnn63f1kypnpo5iuptufcm7r4o5xo9em4kcucvgej7vw9f7p == 
\b\b\c\i\m\2\p\p\w\7\t\g\g\5\1\q\u\j\f\x\t\f\r\v\7\p\p\3\i\p\s\g\t\j\9\s\2\r\m\9\7\1\0\t\7\b\p\v\5\4\9\7\e\e\r\e\9\l\r\c\1\a\1\o\k\6\c\x\k\c\b\c\g\9\1\0\0\d\d\8\r\u\k\6\d\9\p\z\m\c\t\w\4\s\h\x\8\3\a\7\m\l\s\i\j\g\7\i\w\j\i\v\0\7\x\z\l\6\0\j\8\7\e\6\b\u\a\m\n\w\c\c\d\p\z\n\1\i\4\g\s\r\4\p\0\b\r\n\d\7\r\o\h\m\i\8\g\u\u\9\w\8\d\i\r\t\f\v\j\w\y\w\d\w\4\k\3\o\w\i\n\j\w\r\w\j\x\4\y\b\j\m\9\0\k\q\8\w\5\o\b\3\l\h\e\6\w\k\a\k\e\h\v\j\y\d\g\n\c\j\3\c\e\j\s\5\b\q\h\f\i\r\s\a\u\3\l\i\z\e\j\c\2\n\g\e\9\8\e\x\6\d\k\i\6\y\6\d\o\1\g\0\p\k\1\6\h\z\8\0\d\d\6\0\z\l\q\0\p\a\a\6\c\1\7\r\h\s\t\o\9\y\d\y\p\t\7\a\m\k\8\3\2\o\g\h\o\s\p\s\b\3\s\0\1\w\h\c\b\i\p\y\c\3\j\8\z\v\8\1\i\y\4\b\g\d\e\g\j\i\6\h\4\0\d\e\g\i\y\l\8\7\x\l\4\4\9\b\j\0\l\e\8\v\l\6\y\d\w\k\g\z\1\8\m\d\j\g\m\j\u\s\8\c\2\c\f\0\j\r\h\7\z\k\k\x\n\k\q\b\u\z\v\p\f\s\6\f\f\b\9\2\x\a\o\6\h\4\0\f\l\7\f\s\n\k\4\5\2\y\b\t\x\k\h\m\q\h\t\h\a\7\m\b\h\7\9\a\g\q\h\i\w\8\6\7\t\w\j\w\y\m\7\8\f\n\n\6\3\f\1\k\y\p\n\p\o\5\i\u\p\t\u\f\c\m\7\r\4\o\5\x\o\9\e\m\4\k\c\u\c\v\g\e\j\7\v\w\9\f\7\p ]] 00:06:49.821 07:10:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:49.821 07:10:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:49.821 07:10:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:49.821 07:10:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:49.821 07:10:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:49.821 07:10:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:49.821 [2024-07-15 07:10:58.718436] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:49.821 [2024-07-15 07:10:58.718554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63332 ] 00:06:50.079 [2024-07-15 07:10:58.861696] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.079 [2024-07-15 07:10:58.921570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.079 [2024-07-15 07:10:58.951868] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.338  Copying: 512/512 [B] (average 500 kBps) 00:06:50.338 00:06:50.338 07:10:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ tgesnfq177m7a9ppf7znos0czkglsduw9zdc1tpuy94rl2k7vrs8wckmh15wggvkwt1sudx0cxo25yep6pmyljz758kpnjzkknrkgoqjch5ywefpviedhgu2vu7qze7l47e60x1idaoti62sxh8t10ujfxe59mapny8cddap8pb0ql3pnwj0cazl3q149rgjeqwd325q0ebvtwd00tzob9glo7f65wr2c7u5lvernmfq20dc2x2fmq15sc61241i4qijh2ci5qm9i9tz6rg0isjrltta9757odnndtbd9exux4fpflg301g8jo3r64157mnazzen3r55hskfer97jeikx9pbaj7684ox79gn6vyh0k93j6l8gd02ifpatqnneftl21vioaf735currwhfeuquxphhvhck7f83u9vq2vj5bconcnux3mwnd2in1ffkpdfacicxns54o8ajxryhd3c8j88hrhalpw48dookwmgahivl2ozu587f2szdncf == \t\g\e\s\n\f\q\1\7\7\m\7\a\9\p\p\f\7\z\n\o\s\0\c\z\k\g\l\s\d\u\w\9\z\d\c\1\t\p\u\y\9\4\r\l\2\k\7\v\r\s\8\w\c\k\m\h\1\5\w\g\g\v\k\w\t\1\s\u\d\x\0\c\x\o\2\5\y\e\p\6\p\m\y\l\j\z\7\5\8\k\p\n\j\z\k\k\n\r\k\g\o\q\j\c\h\5\y\w\e\f\p\v\i\e\d\h\g\u\2\v\u\7\q\z\e\7\l\4\7\e\6\0\x\1\i\d\a\o\t\i\6\2\s\x\h\8\t\1\0\u\j\f\x\e\5\9\m\a\p\n\y\8\c\d\d\a\p\8\p\b\0\q\l\3\p\n\w\j\0\c\a\z\l\3\q\1\4\9\r\g\j\e\q\w\d\3\2\5\q\0\e\b\v\t\w\d\0\0\t\z\o\b\9\g\l\o\7\f\6\5\w\r\2\c\7\u\5\l\v\e\r\n\m\f\q\2\0\d\c\2\x\2\f\m\q\1\5\s\c\6\1\2\4\1\i\4\q\i\j\h\2\c\i\5\q\m\9\i\9\t\z\6\r\g\0\i\s\j\r\l\t\t\a\9\7\5\7\o\d\n\n\d\t\b\d\9\e\x\u\x\4\f\p\f\l\g\3\0\1\g\8\j\o\3\r\6\4\1\5\7\m\n\a\z\z\e\n\3\r\5\5\h\s\k\f\e\r\9\7\j\e\i\k\x\9\p\b\a\j\7\6\8\4\o\x\7\9\g\n\6\v\y\h\0\k\9\3\j\6\l\8\g\d\0\2\i\f\p\a\t\q\n\n\e\f\t\l\2\1\v\i\o\a\f\7\3\5\c\u\r\r\w\h\f\e\u\q\u\x\p\h\h\v\h\c\k\7\f\8\3\u\9\v\q\2\v\j\5\b\c\o\n\c\n\u\x\3\m\w\n\d\2\i\n\1\f\f\k\p\d\f\a\c\i\c\x\n\s\5\4\o\8\a\j\x\r\y\h\d\3\c\8\j\8\8\h\r\h\a\l\p\w\4\8\d\o\o\k\w\m\g\a\h\i\v\l\2\o\z\u\5\8\7\f\2\s\z\d\n\c\f ]] 00:06:50.338 07:10:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:50.338 07:10:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:50.338 [2024-07-15 07:10:59.200007] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:50.338 [2024-07-15 07:10:59.200121] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63339 ] 00:06:50.595 [2024-07-15 07:10:59.338334] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.595 [2024-07-15 07:10:59.399375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.595 [2024-07-15 07:10:59.430412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.853  Copying: 512/512 [B] (average 500 kBps) 00:06:50.853 00:06:50.853 07:10:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ tgesnfq177m7a9ppf7znos0czkglsduw9zdc1tpuy94rl2k7vrs8wckmh15wggvkwt1sudx0cxo25yep6pmyljz758kpnjzkknrkgoqjch5ywefpviedhgu2vu7qze7l47e60x1idaoti62sxh8t10ujfxe59mapny8cddap8pb0ql3pnwj0cazl3q149rgjeqwd325q0ebvtwd00tzob9glo7f65wr2c7u5lvernmfq20dc2x2fmq15sc61241i4qijh2ci5qm9i9tz6rg0isjrltta9757odnndtbd9exux4fpflg301g8jo3r64157mnazzen3r55hskfer97jeikx9pbaj7684ox79gn6vyh0k93j6l8gd02ifpatqnneftl21vioaf735currwhfeuquxphhvhck7f83u9vq2vj5bconcnux3mwnd2in1ffkpdfacicxns54o8ajxryhd3c8j88hrhalpw48dookwmgahivl2ozu587f2szdncf == \t\g\e\s\n\f\q\1\7\7\m\7\a\9\p\p\f\7\z\n\o\s\0\c\z\k\g\l\s\d\u\w\9\z\d\c\1\t\p\u\y\9\4\r\l\2\k\7\v\r\s\8\w\c\k\m\h\1\5\w\g\g\v\k\w\t\1\s\u\d\x\0\c\x\o\2\5\y\e\p\6\p\m\y\l\j\z\7\5\8\k\p\n\j\z\k\k\n\r\k\g\o\q\j\c\h\5\y\w\e\f\p\v\i\e\d\h\g\u\2\v\u\7\q\z\e\7\l\4\7\e\6\0\x\1\i\d\a\o\t\i\6\2\s\x\h\8\t\1\0\u\j\f\x\e\5\9\m\a\p\n\y\8\c\d\d\a\p\8\p\b\0\q\l\3\p\n\w\j\0\c\a\z\l\3\q\1\4\9\r\g\j\e\q\w\d\3\2\5\q\0\e\b\v\t\w\d\0\0\t\z\o\b\9\g\l\o\7\f\6\5\w\r\2\c\7\u\5\l\v\e\r\n\m\f\q\2\0\d\c\2\x\2\f\m\q\1\5\s\c\6\1\2\4\1\i\4\q\i\j\h\2\c\i\5\q\m\9\i\9\t\z\6\r\g\0\i\s\j\r\l\t\t\a\9\7\5\7\o\d\n\n\d\t\b\d\9\e\x\u\x\4\f\p\f\l\g\3\0\1\g\8\j\o\3\r\6\4\1\5\7\m\n\a\z\z\e\n\3\r\5\5\h\s\k\f\e\r\9\7\j\e\i\k\x\9\p\b\a\j\7\6\8\4\o\x\7\9\g\n\6\v\y\h\0\k\9\3\j\6\l\8\g\d\0\2\i\f\p\a\t\q\n\n\e\f\t\l\2\1\v\i\o\a\f\7\3\5\c\u\r\r\w\h\f\e\u\q\u\x\p\h\h\v\h\c\k\7\f\8\3\u\9\v\q\2\v\j\5\b\c\o\n\c\n\u\x\3\m\w\n\d\2\i\n\1\f\f\k\p\d\f\a\c\i\c\x\n\s\5\4\o\8\a\j\x\r\y\h\d\3\c\8\j\8\8\h\r\h\a\l\p\w\4\8\d\o\o\k\w\m\g\a\h\i\v\l\2\o\z\u\5\8\7\f\2\s\z\d\n\c\f ]] 00:06:50.853 07:10:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:50.853 07:10:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:50.853 [2024-07-15 07:10:59.674322] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:50.853 [2024-07-15 07:10:59.674418] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63347 ] 00:06:51.111 [2024-07-15 07:10:59.813433] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.111 [2024-07-15 07:10:59.872421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.111 [2024-07-15 07:10:59.902775] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:51.369  Copying: 512/512 [B] (average 500 kBps) 00:06:51.369 00:06:51.369 07:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ tgesnfq177m7a9ppf7znos0czkglsduw9zdc1tpuy94rl2k7vrs8wckmh15wggvkwt1sudx0cxo25yep6pmyljz758kpnjzkknrkgoqjch5ywefpviedhgu2vu7qze7l47e60x1idaoti62sxh8t10ujfxe59mapny8cddap8pb0ql3pnwj0cazl3q149rgjeqwd325q0ebvtwd00tzob9glo7f65wr2c7u5lvernmfq20dc2x2fmq15sc61241i4qijh2ci5qm9i9tz6rg0isjrltta9757odnndtbd9exux4fpflg301g8jo3r64157mnazzen3r55hskfer97jeikx9pbaj7684ox79gn6vyh0k93j6l8gd02ifpatqnneftl21vioaf735currwhfeuquxphhvhck7f83u9vq2vj5bconcnux3mwnd2in1ffkpdfacicxns54o8ajxryhd3c8j88hrhalpw48dookwmgahivl2ozu587f2szdncf == \t\g\e\s\n\f\q\1\7\7\m\7\a\9\p\p\f\7\z\n\o\s\0\c\z\k\g\l\s\d\u\w\9\z\d\c\1\t\p\u\y\9\4\r\l\2\k\7\v\r\s\8\w\c\k\m\h\1\5\w\g\g\v\k\w\t\1\s\u\d\x\0\c\x\o\2\5\y\e\p\6\p\m\y\l\j\z\7\5\8\k\p\n\j\z\k\k\n\r\k\g\o\q\j\c\h\5\y\w\e\f\p\v\i\e\d\h\g\u\2\v\u\7\q\z\e\7\l\4\7\e\6\0\x\1\i\d\a\o\t\i\6\2\s\x\h\8\t\1\0\u\j\f\x\e\5\9\m\a\p\n\y\8\c\d\d\a\p\8\p\b\0\q\l\3\p\n\w\j\0\c\a\z\l\3\q\1\4\9\r\g\j\e\q\w\d\3\2\5\q\0\e\b\v\t\w\d\0\0\t\z\o\b\9\g\l\o\7\f\6\5\w\r\2\c\7\u\5\l\v\e\r\n\m\f\q\2\0\d\c\2\x\2\f\m\q\1\5\s\c\6\1\2\4\1\i\4\q\i\j\h\2\c\i\5\q\m\9\i\9\t\z\6\r\g\0\i\s\j\r\l\t\t\a\9\7\5\7\o\d\n\n\d\t\b\d\9\e\x\u\x\4\f\p\f\l\g\3\0\1\g\8\j\o\3\r\6\4\1\5\7\m\n\a\z\z\e\n\3\r\5\5\h\s\k\f\e\r\9\7\j\e\i\k\x\9\p\b\a\j\7\6\8\4\o\x\7\9\g\n\6\v\y\h\0\k\9\3\j\6\l\8\g\d\0\2\i\f\p\a\t\q\n\n\e\f\t\l\2\1\v\i\o\a\f\7\3\5\c\u\r\r\w\h\f\e\u\q\u\x\p\h\h\v\h\c\k\7\f\8\3\u\9\v\q\2\v\j\5\b\c\o\n\c\n\u\x\3\m\w\n\d\2\i\n\1\f\f\k\p\d\f\a\c\i\c\x\n\s\5\4\o\8\a\j\x\r\y\h\d\3\c\8\j\8\8\h\r\h\a\l\p\w\4\8\d\o\o\k\w\m\g\a\h\i\v\l\2\o\z\u\5\8\7\f\2\s\z\d\n\c\f ]] 00:06:51.369 07:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:51.369 07:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:51.369 [2024-07-15 07:11:00.160714] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:51.369 [2024-07-15 07:11:00.160811] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63354 ] 00:06:51.369 [2024-07-15 07:11:00.298656] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.627 [2024-07-15 07:11:00.371256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.627 [2024-07-15 07:11:00.404430] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:51.885  Copying: 512/512 [B] (average 500 kBps) 00:06:51.885 00:06:51.885 ************************************ 00:06:51.885 END TEST dd_flags_misc_forced_aio 00:06:51.885 ************************************ 00:06:51.885 07:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ tgesnfq177m7a9ppf7znos0czkglsduw9zdc1tpuy94rl2k7vrs8wckmh15wggvkwt1sudx0cxo25yep6pmyljz758kpnjzkknrkgoqjch5ywefpviedhgu2vu7qze7l47e60x1idaoti62sxh8t10ujfxe59mapny8cddap8pb0ql3pnwj0cazl3q149rgjeqwd325q0ebvtwd00tzob9glo7f65wr2c7u5lvernmfq20dc2x2fmq15sc61241i4qijh2ci5qm9i9tz6rg0isjrltta9757odnndtbd9exux4fpflg301g8jo3r64157mnazzen3r55hskfer97jeikx9pbaj7684ox79gn6vyh0k93j6l8gd02ifpatqnneftl21vioaf735currwhfeuquxphhvhck7f83u9vq2vj5bconcnux3mwnd2in1ffkpdfacicxns54o8ajxryhd3c8j88hrhalpw48dookwmgahivl2ozu587f2szdncf == \t\g\e\s\n\f\q\1\7\7\m\7\a\9\p\p\f\7\z\n\o\s\0\c\z\k\g\l\s\d\u\w\9\z\d\c\1\t\p\u\y\9\4\r\l\2\k\7\v\r\s\8\w\c\k\m\h\1\5\w\g\g\v\k\w\t\1\s\u\d\x\0\c\x\o\2\5\y\e\p\6\p\m\y\l\j\z\7\5\8\k\p\n\j\z\k\k\n\r\k\g\o\q\j\c\h\5\y\w\e\f\p\v\i\e\d\h\g\u\2\v\u\7\q\z\e\7\l\4\7\e\6\0\x\1\i\d\a\o\t\i\6\2\s\x\h\8\t\1\0\u\j\f\x\e\5\9\m\a\p\n\y\8\c\d\d\a\p\8\p\b\0\q\l\3\p\n\w\j\0\c\a\z\l\3\q\1\4\9\r\g\j\e\q\w\d\3\2\5\q\0\e\b\v\t\w\d\0\0\t\z\o\b\9\g\l\o\7\f\6\5\w\r\2\c\7\u\5\l\v\e\r\n\m\f\q\2\0\d\c\2\x\2\f\m\q\1\5\s\c\6\1\2\4\1\i\4\q\i\j\h\2\c\i\5\q\m\9\i\9\t\z\6\r\g\0\i\s\j\r\l\t\t\a\9\7\5\7\o\d\n\n\d\t\b\d\9\e\x\u\x\4\f\p\f\l\g\3\0\1\g\8\j\o\3\r\6\4\1\5\7\m\n\a\z\z\e\n\3\r\5\5\h\s\k\f\e\r\9\7\j\e\i\k\x\9\p\b\a\j\7\6\8\4\o\x\7\9\g\n\6\v\y\h\0\k\9\3\j\6\l\8\g\d\0\2\i\f\p\a\t\q\n\n\e\f\t\l\2\1\v\i\o\a\f\7\3\5\c\u\r\r\w\h\f\e\u\q\u\x\p\h\h\v\h\c\k\7\f\8\3\u\9\v\q\2\v\j\5\b\c\o\n\c\n\u\x\3\m\w\n\d\2\i\n\1\f\f\k\p\d\f\a\c\i\c\x\n\s\5\4\o\8\a\j\x\r\y\h\d\3\c\8\j\8\8\h\r\h\a\l\p\w\4\8\d\o\o\k\w\m\g\a\h\i\v\l\2\o\z\u\5\8\7\f\2\s\z\d\n\c\f ]] 00:06:51.885 00:06:51.885 real 0m3.920s 00:06:51.885 user 0m2.153s 00:06:51.885 sys 0m0.785s 00:06:51.885 07:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.885 07:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:51.885 07:11:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:51.885 07:11:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:51.885 07:11:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:51.885 07:11:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:51.885 ************************************ 00:06:51.885 END TEST spdk_dd_posix 00:06:51.885 ************************************ 00:06:51.885 00:06:51.885 real 0m17.710s 00:06:51.885 user 0m8.575s 00:06:51.885 
sys 0m4.441s 00:06:51.885 07:11:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.885 07:11:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:51.885 07:11:00 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:51.885 07:11:00 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:51.885 07:11:00 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.885 07:11:00 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.885 07:11:00 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:51.885 ************************************ 00:06:51.885 START TEST spdk_dd_malloc 00:06:51.885 ************************************ 00:06:51.886 07:11:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:51.886 * Looking for test storage... 00:06:51.886 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:51.886 07:11:00 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:51.886 07:11:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.886 07:11:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.886 07:11:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.886 07:11:00 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.886 07:11:00 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.886 07:11:00 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.886 07:11:00 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:51.886 07:11:00 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.886 07:11:00 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:51.886 07:11:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.886 07:11:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.886 07:11:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:51.886 ************************************ 00:06:51.886 START TEST dd_malloc_copy 00:06:51.886 ************************************ 00:06:51.886 07:11:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:06:51.886 07:11:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:51.886 07:11:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:51.886 07:11:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:51.886 07:11:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:51.886 07:11:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:51.886 07:11:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:51.886 07:11:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:51.886 07:11:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:51.886 07:11:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:51.886 07:11:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:52.144 [2024-07-15 07:11:00.876922] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:52.144 [2024-07-15 07:11:00.877017] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63423 ] 00:06:52.144 { 00:06:52.144 "subsystems": [ 00:06:52.144 { 00:06:52.144 "subsystem": "bdev", 00:06:52.144 "config": [ 00:06:52.144 { 00:06:52.144 "params": { 00:06:52.144 "block_size": 512, 00:06:52.144 "num_blocks": 1048576, 00:06:52.144 "name": "malloc0" 00:06:52.144 }, 00:06:52.144 "method": "bdev_malloc_create" 00:06:52.144 }, 00:06:52.144 { 00:06:52.144 "params": { 00:06:52.144 "block_size": 512, 00:06:52.144 "num_blocks": 1048576, 00:06:52.144 "name": "malloc1" 00:06:52.144 }, 00:06:52.144 "method": "bdev_malloc_create" 00:06:52.144 }, 00:06:52.144 { 00:06:52.144 "method": "bdev_wait_for_examine" 00:06:52.144 } 00:06:52.144 ] 00:06:52.144 } 00:06:52.144 ] 00:06:52.144 } 00:06:52.144 [2024-07-15 07:11:01.015279] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.144 [2024-07-15 07:11:01.077645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.402 [2024-07-15 07:11:01.108004] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.538  Copying: 204/512 [MB] (204 MBps) Copying: 406/512 [MB] (202 MBps) Copying: 512/512 [MB] (average 201 MBps) 00:06:55.538 00:06:55.538 07:11:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:55.538 07:11:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:55.538 07:11:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:55.538 07:11:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:55.538 [2024-07-15 07:11:04.261722] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:55.538 [2024-07-15 07:11:04.261810] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63469 ] 00:06:55.538 { 00:06:55.538 "subsystems": [ 00:06:55.538 { 00:06:55.538 "subsystem": "bdev", 00:06:55.538 "config": [ 00:06:55.538 { 00:06:55.538 "params": { 00:06:55.538 "block_size": 512, 00:06:55.538 "num_blocks": 1048576, 00:06:55.538 "name": "malloc0" 00:06:55.538 }, 00:06:55.538 "method": "bdev_malloc_create" 00:06:55.538 }, 00:06:55.538 { 00:06:55.538 "params": { 00:06:55.538 "block_size": 512, 00:06:55.538 "num_blocks": 1048576, 00:06:55.538 "name": "malloc1" 00:06:55.538 }, 00:06:55.538 "method": "bdev_malloc_create" 00:06:55.538 }, 00:06:55.538 { 00:06:55.538 "method": "bdev_wait_for_examine" 00:06:55.538 } 00:06:55.538 ] 00:06:55.538 } 00:06:55.538 ] 00:06:55.538 } 00:06:55.538 [2024-07-15 07:11:04.399185] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.538 [2024-07-15 07:11:04.457808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.538 [2024-07-15 07:11:04.487612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.985  Copying: 196/512 [MB] (196 MBps) Copying: 393/512 [MB] (197 MBps) Copying: 512/512 [MB] (average 196 MBps) 00:06:58.985 00:06:58.985 ************************************ 00:06:58.985 END TEST dd_malloc_copy 00:06:58.985 ************************************ 00:06:58.985 00:06:58.985 real 0m6.829s 00:06:58.985 user 0m6.173s 00:06:58.985 sys 0m0.490s 00:06:58.985 07:11:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.985 07:11:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:58.985 07:11:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:06:58.985 00:06:58.985 real 0m6.968s 00:06:58.985 user 0m6.225s 00:06:58.985 sys 0m0.574s 00:06:58.985 07:11:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.985 07:11:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:58.985 ************************************ 00:06:58.985 END TEST spdk_dd_malloc 00:06:58.985 ************************************ 00:06:58.985 07:11:07 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:58.985 07:11:07 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:58.985 07:11:07 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:58.985 07:11:07 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.985 07:11:07 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:58.985 ************************************ 00:06:58.985 START TEST spdk_dd_bdev_to_bdev 00:06:58.985 ************************************ 00:06:58.985 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:58.985 * Looking for test storage... 
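The dd_malloc_copy passes above reduce to a single pattern: gen_conf emits a bdev configuration as JSON, spdk_dd receives it over a file descriptor (--json /dev/fd/62), and 512 MiB is copied between the two malloc bdevs in each direction. A minimal stand-alone sketch of that invocation, with the build path and JSON shape taken from this trace as assumptions, is:

#!/usr/bin/env bash
# Sketch of the malloc-to-malloc copy traced above. The spdk_dd path matches
# this run's build tree and the JSON mirrors what gen_conf emitted; both are
# assumptions to adjust for another checkout.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

conf() { cat <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 },
    "method": "bdev_malloc_create" },
  { "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 },
    "method": "bdev_malloc_create" },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
}

# Copy the whole 512 MiB malloc0 bdev into malloc1, feeding the config over a
# file descriptor exactly as the --json /dev/fd/62 invocations above do.
"$SPDK_DD" --ib=malloc0 --ob=malloc1 --json <(conf)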
00:06:58.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:58.985 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:58.985 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:58.986 
07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:58.986 ************************************ 00:06:58.986 START TEST dd_inflate_file 00:06:58.986 ************************************ 00:06:58.986 07:11:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:58.986 [2024-07-15 07:11:07.891650] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:58.986 [2024-07-15 07:11:07.891749] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63575 ] 00:06:59.245 [2024-07-15 07:11:08.026124] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.245 [2024-07-15 07:11:08.086371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.245 [2024-07-15 07:11:08.116405] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.504  Copying: 64/64 [MB] (average 1641 MBps) 00:06:59.504 00:06:59.504 00:06:59.504 real 0m0.490s 00:06:59.504 user 0m0.294s 00:06:59.505 sys 0m0.213s 00:06:59.505 07:11:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.505 07:11:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:59.505 ************************************ 00:06:59.505 END TEST dd_inflate_file 00:06:59.505 ************************************ 00:06:59.505 07:11:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:06:59.505 07:11:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:59.505 07:11:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:59.505 07:11:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:59.505 07:11:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:59.505 07:11:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:59.505 07:11:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:59.505 07:11:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:59.505 07:11:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.505 07:11:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:59.505 ************************************ 00:06:59.505 START TEST dd_copy_to_out_bdev 00:06:59.505 ************************************ 00:06:59.505 07:11:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:59.505 { 00:06:59.505 "subsystems": [ 00:06:59.505 { 00:06:59.505 "subsystem": "bdev", 00:06:59.505 "config": [ 00:06:59.505 { 00:06:59.505 "params": { 00:06:59.505 "trtype": "pcie", 00:06:59.505 "traddr": "0000:00:10.0", 00:06:59.505 "name": "Nvme0" 00:06:59.505 }, 00:06:59.505 "method": "bdev_nvme_attach_controller" 00:06:59.505 }, 00:06:59.505 { 00:06:59.505 "params": { 00:06:59.505 "trtype": "pcie", 00:06:59.505 "traddr": "0000:00:11.0", 00:06:59.505 "name": "Nvme1" 00:06:59.505 }, 00:06:59.505 "method": "bdev_nvme_attach_controller" 00:06:59.505 }, 00:06:59.505 { 00:06:59.505 "method": "bdev_wait_for_examine" 00:06:59.505 } 00:06:59.505 ] 00:06:59.505 } 00:06:59.505 ] 00:06:59.505 } 00:06:59.505 [2024-07-15 07:11:08.438423] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:06:59.505 [2024-07-15 07:11:08.438933] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63603 ] 00:06:59.764 [2024-07-15 07:11:08.573178] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.764 [2024-07-15 07:11:08.632050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.764 [2024-07-15 07:11:08.663864] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.141  Copying: 62/64 [MB] (62 MBps) Copying: 64/64 [MB] (average 62 MBps) 00:07:01.141 00:07:01.141 00:07:01.141 real 0m1.666s 00:07:01.141 user 0m1.475s 00:07:01.141 sys 0m1.293s 00:07:01.141 07:11:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.141 ************************************ 00:07:01.141 END TEST dd_copy_to_out_bdev 00:07:01.141 ************************************ 00:07:01.141 07:11:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:01.421 07:11:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:01.421 07:11:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:01.421 07:11:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:01.421 07:11:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:01.421 07:11:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.421 07:11:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:01.421 ************************************ 00:07:01.421 START TEST dd_offset_magic 00:07:01.421 ************************************ 00:07:01.421 07:11:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:07:01.421 07:11:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:01.421 07:11:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:01.421 07:11:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:01.421 07:11:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:01.421 07:11:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:01.421 07:11:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:01.421 07:11:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:01.421 07:11:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:01.421 [2024-07-15 07:11:10.161601] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:07:01.421 [2024-07-15 07:11:10.161701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63649 ] 00:07:01.421 { 00:07:01.421 "subsystems": [ 00:07:01.421 { 00:07:01.421 "subsystem": "bdev", 00:07:01.421 "config": [ 00:07:01.421 { 00:07:01.421 "params": { 00:07:01.421 "trtype": "pcie", 00:07:01.421 "traddr": "0000:00:10.0", 00:07:01.421 "name": "Nvme0" 00:07:01.421 }, 00:07:01.421 "method": "bdev_nvme_attach_controller" 00:07:01.421 }, 00:07:01.421 { 00:07:01.421 "params": { 00:07:01.421 "trtype": "pcie", 00:07:01.421 "traddr": "0000:00:11.0", 00:07:01.421 "name": "Nvme1" 00:07:01.421 }, 00:07:01.421 "method": "bdev_nvme_attach_controller" 00:07:01.421 }, 00:07:01.421 { 00:07:01.421 "method": "bdev_wait_for_examine" 00:07:01.421 } 00:07:01.421 ] 00:07:01.421 } 00:07:01.421 ] 00:07:01.421 } 00:07:01.421 [2024-07-15 07:11:10.298579] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.421 [2024-07-15 07:11:10.361789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.687 [2024-07-15 07:11:10.394677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.945  Copying: 65/65 [MB] (average 1031 MBps) 00:07:01.945 00:07:01.945 07:11:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:01.945 07:11:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:01.945 07:11:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:01.945 07:11:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:01.945 [2024-07-15 07:11:10.863154] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:07:01.945 [2024-07-15 07:11:10.863247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63669 ] 00:07:01.945 { 00:07:01.945 "subsystems": [ 00:07:01.945 { 00:07:01.945 "subsystem": "bdev", 00:07:01.945 "config": [ 00:07:01.945 { 00:07:01.945 "params": { 00:07:01.945 "trtype": "pcie", 00:07:01.945 "traddr": "0000:00:10.0", 00:07:01.945 "name": "Nvme0" 00:07:01.945 }, 00:07:01.945 "method": "bdev_nvme_attach_controller" 00:07:01.945 }, 00:07:01.945 { 00:07:01.945 "params": { 00:07:01.945 "trtype": "pcie", 00:07:01.945 "traddr": "0000:00:11.0", 00:07:01.945 "name": "Nvme1" 00:07:01.945 }, 00:07:01.945 "method": "bdev_nvme_attach_controller" 00:07:01.945 }, 00:07:01.945 { 00:07:01.945 "method": "bdev_wait_for_examine" 00:07:01.945 } 00:07:01.945 ] 00:07:01.945 } 00:07:01.946 ] 00:07:01.946 } 00:07:02.204 [2024-07-15 07:11:10.996802] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.204 [2024-07-15 07:11:11.052840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.204 [2024-07-15 07:11:11.081823] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.464  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:02.464 00:07:02.464 07:11:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:02.464 07:11:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:02.464 07:11:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:02.464 07:11:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:02.464 07:11:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:02.464 07:11:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:02.464 07:11:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:02.723 [2024-07-15 07:11:11.450613] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:07:02.723 [2024-07-15 07:11:11.450919] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63680 ] 00:07:02.723 { 00:07:02.723 "subsystems": [ 00:07:02.723 { 00:07:02.723 "subsystem": "bdev", 00:07:02.723 "config": [ 00:07:02.723 { 00:07:02.723 "params": { 00:07:02.723 "trtype": "pcie", 00:07:02.723 "traddr": "0000:00:10.0", 00:07:02.723 "name": "Nvme0" 00:07:02.723 }, 00:07:02.723 "method": "bdev_nvme_attach_controller" 00:07:02.723 }, 00:07:02.723 { 00:07:02.723 "params": { 00:07:02.723 "trtype": "pcie", 00:07:02.723 "traddr": "0000:00:11.0", 00:07:02.723 "name": "Nvme1" 00:07:02.723 }, 00:07:02.723 "method": "bdev_nvme_attach_controller" 00:07:02.723 }, 00:07:02.723 { 00:07:02.723 "method": "bdev_wait_for_examine" 00:07:02.723 } 00:07:02.723 ] 00:07:02.723 } 00:07:02.723 ] 00:07:02.723 } 00:07:02.723 [2024-07-15 07:11:11.589437] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.723 [2024-07-15 07:11:11.649757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.981 [2024-07-15 07:11:11.681058] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.268  Copying: 65/65 [MB] (average 1083 MBps) 00:07:03.268 00:07:03.268 07:11:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:03.268 07:11:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:03.268 07:11:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:03.268 07:11:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:03.268 [2024-07-15 07:11:12.159157] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:07:03.268 [2024-07-15 07:11:12.159262] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63700 ] 00:07:03.268 { 00:07:03.268 "subsystems": [ 00:07:03.268 { 00:07:03.268 "subsystem": "bdev", 00:07:03.268 "config": [ 00:07:03.268 { 00:07:03.268 "params": { 00:07:03.268 "trtype": "pcie", 00:07:03.268 "traddr": "0000:00:10.0", 00:07:03.268 "name": "Nvme0" 00:07:03.268 }, 00:07:03.268 "method": "bdev_nvme_attach_controller" 00:07:03.268 }, 00:07:03.268 { 00:07:03.268 "params": { 00:07:03.268 "trtype": "pcie", 00:07:03.268 "traddr": "0000:00:11.0", 00:07:03.268 "name": "Nvme1" 00:07:03.268 }, 00:07:03.268 "method": "bdev_nvme_attach_controller" 00:07:03.268 }, 00:07:03.268 { 00:07:03.268 "method": "bdev_wait_for_examine" 00:07:03.268 } 00:07:03.268 ] 00:07:03.268 } 00:07:03.268 ] 00:07:03.268 } 00:07:03.527 [2024-07-15 07:11:12.298798] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.527 [2024-07-15 07:11:12.360973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.527 [2024-07-15 07:11:12.392487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.785  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:03.785 00:07:03.785 07:11:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:03.785 07:11:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:03.785 00:07:03.785 real 0m2.614s 00:07:03.785 user 0m1.957s 00:07:03.785 sys 0m0.648s 00:07:03.785 07:11:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.785 ************************************ 00:07:03.785 END TEST dd_offset_magic 00:07:03.785 ************************************ 00:07:03.785 07:11:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:04.043 07:11:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:04.043 07:11:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:04.043 07:11:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:04.043 07:11:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:04.043 07:11:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:04.043 07:11:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:04.043 07:11:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:04.043 07:11:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:04.043 07:11:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:04.043 07:11:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:04.043 07:11:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:04.043 07:11:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:04.043 [2024-07-15 07:11:12.815755] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
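The dd_offset_magic passes above copy 65 MiB from Nvme0n1 into Nvme1n1 at a 16 MiB and then a 64 MiB offset (--seek), read 1 MiB back from the matching offset (--skip), and compare the leading 26 bytes against the magic string. A condensed sketch of that round trip, reusing this run's PCI addresses and assuming Nvme0n1 was already seeded with the magic-prefixed dump by the earlier dd_copy_to_out_bdev step, is:

#!/usr/bin/env bash
# Sketch of the dd_offset_magic seek/skip round trip traced above. The PCI
# addresses, the spdk_dd path, and the expectation that Nvme0n1 begins with
# the magic-prefixed dump are all taken from this run.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

conf() { cat <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "name": "Nvme0", "traddr": "0000:00:10.0", "trtype": "pcie" },
    "method": "bdev_nvme_attach_controller" },
  { "params": { "name": "Nvme1", "traddr": "0000:00:11.0", "trtype": "pcie" },
    "method": "bdev_nvme_attach_controller" },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
}

for offset in 16 64; do
  # Write 65 MiB of Nvme0n1 into Nvme1n1 starting $offset MiB into the target.
  "$SPDK_DD" --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek="$offset" --bs=1048576 --json <(conf)
  # Read 1 MiB back from the same offset and check the 26-byte magic prefix.
  "$SPDK_DD" --ib=Nvme1n1 --of=dd.dump1 --count=1 --skip="$offset" --bs=1048576 --json <(conf)
  head -c 26 dd.dump1 && echo    # expected: This Is Our Magic, find it
done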
00:07:04.043 [2024-07-15 07:11:12.815871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63737 ] 00:07:04.043 { 00:07:04.043 "subsystems": [ 00:07:04.043 { 00:07:04.043 "subsystem": "bdev", 00:07:04.043 "config": [ 00:07:04.043 { 00:07:04.043 "params": { 00:07:04.043 "trtype": "pcie", 00:07:04.043 "traddr": "0000:00:10.0", 00:07:04.043 "name": "Nvme0" 00:07:04.043 }, 00:07:04.043 "method": "bdev_nvme_attach_controller" 00:07:04.043 }, 00:07:04.043 { 00:07:04.043 "params": { 00:07:04.043 "trtype": "pcie", 00:07:04.043 "traddr": "0000:00:11.0", 00:07:04.043 "name": "Nvme1" 00:07:04.043 }, 00:07:04.043 "method": "bdev_nvme_attach_controller" 00:07:04.043 }, 00:07:04.043 { 00:07:04.043 "method": "bdev_wait_for_examine" 00:07:04.043 } 00:07:04.043 ] 00:07:04.043 } 00:07:04.043 ] 00:07:04.043 } 00:07:04.043 [2024-07-15 07:11:12.953872] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.301 [2024-07-15 07:11:13.014683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.301 [2024-07-15 07:11:13.045487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.558  Copying: 5120/5120 [kB] (average 1666 MBps) 00:07:04.558 00:07:04.558 07:11:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:04.558 07:11:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:04.558 07:11:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:04.558 07:11:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:04.558 07:11:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:04.558 07:11:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:04.558 07:11:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:04.558 07:11:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:04.558 07:11:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:04.558 07:11:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:04.558 [2024-07-15 07:11:13.415247] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:07:04.558 [2024-07-15 07:11:13.415350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63747 ] 00:07:04.558 { 00:07:04.558 "subsystems": [ 00:07:04.558 { 00:07:04.558 "subsystem": "bdev", 00:07:04.558 "config": [ 00:07:04.558 { 00:07:04.558 "params": { 00:07:04.558 "trtype": "pcie", 00:07:04.558 "traddr": "0000:00:10.0", 00:07:04.558 "name": "Nvme0" 00:07:04.558 }, 00:07:04.558 "method": "bdev_nvme_attach_controller" 00:07:04.558 }, 00:07:04.558 { 00:07:04.558 "params": { 00:07:04.558 "trtype": "pcie", 00:07:04.558 "traddr": "0000:00:11.0", 00:07:04.558 "name": "Nvme1" 00:07:04.558 }, 00:07:04.558 "method": "bdev_nvme_attach_controller" 00:07:04.558 }, 00:07:04.558 { 00:07:04.558 "method": "bdev_wait_for_examine" 00:07:04.558 } 00:07:04.558 ] 00:07:04.558 } 00:07:04.558 ] 00:07:04.558 } 00:07:04.862 [2024-07-15 07:11:13.553890] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.862 [2024-07-15 07:11:13.612198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.862 [2024-07-15 07:11:13.643166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:05.120  Copying: 5120/5120 [kB] (average 1000 MBps) 00:07:05.120 00:07:05.120 07:11:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:05.120 00:07:05.120 real 0m6.244s 00:07:05.120 user 0m4.710s 00:07:05.120 sys 0m2.684s 00:07:05.120 07:11:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.120 07:11:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:05.120 ************************************ 00:07:05.121 END TEST spdk_dd_bdev_to_bdev 00:07:05.121 ************************************ 00:07:05.121 07:11:14 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:05.121 07:11:14 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:05.121 07:11:14 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:05.121 07:11:14 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.121 07:11:14 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.121 07:11:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:05.121 ************************************ 00:07:05.121 START TEST spdk_dd_uring 00:07:05.121 ************************************ 00:07:05.121 07:11:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:05.380 * Looking for test storage... 
00:07:05.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:05.380 ************************************ 00:07:05.380 START TEST dd_uring_copy 00:07:05.380 ************************************ 00:07:05.380 
07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:07:05.380 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:07:05.381 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:07:05.381 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:07:05.381 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:05.381 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:05.381 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:05.381 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:05.381 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:05.381 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:05.381 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:05.381 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:05.381 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:05.381 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=n4rjcroiexja2q2zz08iykmct7r27qkbcty894udqr49t0h4edwsns4w1n1pynsvf1kw00e2d55h1b3hmbvu4s9e7q9ugp4dslz0kged4nwbcawjqpittmcc19ectn42jbdf2lpcai8g1qzm5ywfd3jix9m84xb05mq26mua9yu9bnj18rik03qnq6jz1siwp220foki42x4cqay2le2cb2mc1vveb4qrf798toxcripwn65tjkd62aldjlovvatok090b44smkypqxixbhtw2b10thwgvh5937klsxzh7glo6mltu18s0qefobjz7zgdn6e7o7onxmgrnp2jm5zz8nlfgjix6fs1njcb194b1rraoj0v4h5df25t21i33s34b4xxnhcrtf2zbr98xis4wi1ratdqh7dzaie6mfqef738ci9q8otcgq42xgkla9avmirfl8xrdcmuej1aotijv6kkvczg6847y672yrdijrlcrtyihcv9r8qg5zc8yezn01spa5133qbqx0gy2o5gu7cnr084fv9ofq4dzybr46dzqjg4azyat588y3wlmqfpbr9zhzom5gmrd0r5ni1zbjm8hskl3n91f0egqtr1tfi76104shvcjanzl8hcjs2a1usmgda81titysxwypyua672h05pwcs85ocy809c0zyylascktc0wvaue1za8calnjm20lm4koj40144s467mhfszsrinu04v9r0uwhlo41o8zna70qymapfpuw0slywfshj5tiwjpwu5soklmo01rjzc563wbj3d54hx9s7gzo2e9veqkz9i9c8p1333il8yuz5xmo7sdcyogtt3rhirkss2bffmxjkphsyc6xx2kynzlmlyyqio5wmx55e8zuh76qqgd5ykxj9b5jwbygythmucht1kqsje5vb3fwadzxcb9zbncyyaqdbckcp75jux611fxliuanivt6yunmav2rnvkdqfwm9oeyme9qdpkm80z6qh93hkkauw9hapnk 00:07:05.381 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo n4rjcroiexja2q2zz08iykmct7r27qkbcty894udqr49t0h4edwsns4w1n1pynsvf1kw00e2d55h1b3hmbvu4s9e7q9ugp4dslz0kged4nwbcawjqpittmcc19ectn42jbdf2lpcai8g1qzm5ywfd3jix9m84xb05mq26mua9yu9bnj18rik03qnq6jz1siwp220foki42x4cqay2le2cb2mc1vveb4qrf798toxcripwn65tjkd62aldjlovvatok090b44smkypqxixbhtw2b10thwgvh5937klsxzh7glo6mltu18s0qefobjz7zgdn6e7o7onxmgrnp2jm5zz8nlfgjix6fs1njcb194b1rraoj0v4h5df25t21i33s34b4xxnhcrtf2zbr98xis4wi1ratdqh7dzaie6mfqef738ci9q8otcgq42xgkla9avmirfl8xrdcmuej1aotijv6kkvczg6847y672yrdijrlcrtyihcv9r8qg5zc8yezn01spa5133qbqx0gy2o5gu7cnr084fv9ofq4dzybr46dzqjg4azyat588y3wlmqfpbr9zhzom5gmrd0r5ni1zbjm8hskl3n91f0egqtr1tfi76104shvcjanzl8hcjs2a1usmgda81titysxwypyua672h05pwcs85ocy809c0zyylascktc0wvaue1za8calnjm20lm4koj40144s467mhfszsrinu04v9r0uwhlo41o8zna70qymapfpuw0slywfshj5tiwjpwu5soklmo01rjzc563wbj3d54hx9s7gzo2e9veqkz9i9c8p1333il8yuz5xmo7sdcyogtt3rhirkss2bffmxjkphsyc6xx2kynzlmlyyqio5wmx55e8zuh76qqgd5ykxj9b5jwbygythmucht1kqsje5vb3fwadzxcb9zbncyyaqdbckcp75jux611fxliuanivt6yunmav2rnvkdqfwm9oeyme9qdpkm80z6qh93hkkauw9hapnk 00:07:05.381 07:11:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:05.381 [2024-07-15 07:11:14.224831] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:07:05.381 [2024-07-15 07:11:14.224924] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63817 ] 00:07:05.640 [2024-07-15 07:11:14.361320] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.640 [2024-07-15 07:11:14.425754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.640 [2024-07-15 07:11:14.456880] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.463  Copying: 511/511 [MB] (average 1434 MBps) 00:07:06.463 00:07:06.463 07:11:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:06.463 07:11:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:06.463 07:11:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:06.463 07:11:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:06.463 [2024-07-15 07:11:15.274715] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:07:06.463 [2024-07-15 07:11:15.274828] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63833 ] 00:07:06.463 { 00:07:06.463 "subsystems": [ 00:07:06.463 { 00:07:06.463 "subsystem": "bdev", 00:07:06.463 "config": [ 00:07:06.463 { 00:07:06.463 "params": { 00:07:06.463 "block_size": 512, 00:07:06.463 "num_blocks": 1048576, 00:07:06.463 "name": "malloc0" 00:07:06.463 }, 00:07:06.463 "method": "bdev_malloc_create" 00:07:06.463 }, 00:07:06.463 { 00:07:06.463 "params": { 00:07:06.463 "filename": "/dev/zram1", 00:07:06.463 "name": "uring0" 00:07:06.463 }, 00:07:06.463 "method": "bdev_uring_create" 00:07:06.463 }, 00:07:06.463 { 00:07:06.463 "method": "bdev_wait_for_examine" 00:07:06.463 } 00:07:06.463 ] 00:07:06.463 } 00:07:06.463 ] 00:07:06.463 } 00:07:06.463 [2024-07-15 07:11:15.411201] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.721 [2024-07-15 07:11:15.472214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.721 [2024-07-15 07:11:15.502505] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.286  Copying: 221/512 [MB] (221 MBps) Copying: 444/512 [MB] (222 MBps) Copying: 512/512 [MB] (average 221 MBps) 00:07:09.286 00:07:09.286 07:11:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:09.286 07:11:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:09.286 07:11:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:09.286 07:11:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:09.545 [2024-07-15 07:11:18.251547] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:07:09.545 [2024-07-15 07:11:18.252249] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63877 ] 00:07:09.545 { 00:07:09.545 "subsystems": [ 00:07:09.545 { 00:07:09.545 "subsystem": "bdev", 00:07:09.545 "config": [ 00:07:09.545 { 00:07:09.545 "params": { 00:07:09.545 "block_size": 512, 00:07:09.545 "num_blocks": 1048576, 00:07:09.545 "name": "malloc0" 00:07:09.545 }, 00:07:09.545 "method": "bdev_malloc_create" 00:07:09.545 }, 00:07:09.545 { 00:07:09.545 "params": { 00:07:09.545 "filename": "/dev/zram1", 00:07:09.545 "name": "uring0" 00:07:09.545 }, 00:07:09.545 "method": "bdev_uring_create" 00:07:09.545 }, 00:07:09.545 { 00:07:09.545 "method": "bdev_wait_for_examine" 00:07:09.545 } 00:07:09.545 ] 00:07:09.545 } 00:07:09.545 ] 00:07:09.545 } 00:07:09.545 [2024-07-15 07:11:18.397176] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.545 [2024-07-15 07:11:18.451768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.545 [2024-07-15 07:11:18.482650] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:13.111  Copying: 175/512 [MB] (175 MBps) Copying: 342/512 [MB] (166 MBps) Copying: 505/512 [MB] (163 MBps) Copying: 512/512 [MB] (average 168 MBps) 00:07:13.111 00:07:13.111 07:11:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:13.111 07:11:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ n4rjcroiexja2q2zz08iykmct7r27qkbcty894udqr49t0h4edwsns4w1n1pynsvf1kw00e2d55h1b3hmbvu4s9e7q9ugp4dslz0kged4nwbcawjqpittmcc19ectn42jbdf2lpcai8g1qzm5ywfd3jix9m84xb05mq26mua9yu9bnj18rik03qnq6jz1siwp220foki42x4cqay2le2cb2mc1vveb4qrf798toxcripwn65tjkd62aldjlovvatok090b44smkypqxixbhtw2b10thwgvh5937klsxzh7glo6mltu18s0qefobjz7zgdn6e7o7onxmgrnp2jm5zz8nlfgjix6fs1njcb194b1rraoj0v4h5df25t21i33s34b4xxnhcrtf2zbr98xis4wi1ratdqh7dzaie6mfqef738ci9q8otcgq42xgkla9avmirfl8xrdcmuej1aotijv6kkvczg6847y672yrdijrlcrtyihcv9r8qg5zc8yezn01spa5133qbqx0gy2o5gu7cnr084fv9ofq4dzybr46dzqjg4azyat588y3wlmqfpbr9zhzom5gmrd0r5ni1zbjm8hskl3n91f0egqtr1tfi76104shvcjanzl8hcjs2a1usmgda81titysxwypyua672h05pwcs85ocy809c0zyylascktc0wvaue1za8calnjm20lm4koj40144s467mhfszsrinu04v9r0uwhlo41o8zna70qymapfpuw0slywfshj5tiwjpwu5soklmo01rjzc563wbj3d54hx9s7gzo2e9veqkz9i9c8p1333il8yuz5xmo7sdcyogtt3rhirkss2bffmxjkphsyc6xx2kynzlmlyyqio5wmx55e8zuh76qqgd5ykxj9b5jwbygythmucht1kqsje5vb3fwadzxcb9zbncyyaqdbckcp75jux611fxliuanivt6yunmav2rnvkdqfwm9oeyme9qdpkm80z6qh93hkkauw9hapnk == 
\n\4\r\j\c\r\o\i\e\x\j\a\2\q\2\z\z\0\8\i\y\k\m\c\t\7\r\2\7\q\k\b\c\t\y\8\9\4\u\d\q\r\4\9\t\0\h\4\e\d\w\s\n\s\4\w\1\n\1\p\y\n\s\v\f\1\k\w\0\0\e\2\d\5\5\h\1\b\3\h\m\b\v\u\4\s\9\e\7\q\9\u\g\p\4\d\s\l\z\0\k\g\e\d\4\n\w\b\c\a\w\j\q\p\i\t\t\m\c\c\1\9\e\c\t\n\4\2\j\b\d\f\2\l\p\c\a\i\8\g\1\q\z\m\5\y\w\f\d\3\j\i\x\9\m\8\4\x\b\0\5\m\q\2\6\m\u\a\9\y\u\9\b\n\j\1\8\r\i\k\0\3\q\n\q\6\j\z\1\s\i\w\p\2\2\0\f\o\k\i\4\2\x\4\c\q\a\y\2\l\e\2\c\b\2\m\c\1\v\v\e\b\4\q\r\f\7\9\8\t\o\x\c\r\i\p\w\n\6\5\t\j\k\d\6\2\a\l\d\j\l\o\v\v\a\t\o\k\0\9\0\b\4\4\s\m\k\y\p\q\x\i\x\b\h\t\w\2\b\1\0\t\h\w\g\v\h\5\9\3\7\k\l\s\x\z\h\7\g\l\o\6\m\l\t\u\1\8\s\0\q\e\f\o\b\j\z\7\z\g\d\n\6\e\7\o\7\o\n\x\m\g\r\n\p\2\j\m\5\z\z\8\n\l\f\g\j\i\x\6\f\s\1\n\j\c\b\1\9\4\b\1\r\r\a\o\j\0\v\4\h\5\d\f\2\5\t\2\1\i\3\3\s\3\4\b\4\x\x\n\h\c\r\t\f\2\z\b\r\9\8\x\i\s\4\w\i\1\r\a\t\d\q\h\7\d\z\a\i\e\6\m\f\q\e\f\7\3\8\c\i\9\q\8\o\t\c\g\q\4\2\x\g\k\l\a\9\a\v\m\i\r\f\l\8\x\r\d\c\m\u\e\j\1\a\o\t\i\j\v\6\k\k\v\c\z\g\6\8\4\7\y\6\7\2\y\r\d\i\j\r\l\c\r\t\y\i\h\c\v\9\r\8\q\g\5\z\c\8\y\e\z\n\0\1\s\p\a\5\1\3\3\q\b\q\x\0\g\y\2\o\5\g\u\7\c\n\r\0\8\4\f\v\9\o\f\q\4\d\z\y\b\r\4\6\d\z\q\j\g\4\a\z\y\a\t\5\8\8\y\3\w\l\m\q\f\p\b\r\9\z\h\z\o\m\5\g\m\r\d\0\r\5\n\i\1\z\b\j\m\8\h\s\k\l\3\n\9\1\f\0\e\g\q\t\r\1\t\f\i\7\6\1\0\4\s\h\v\c\j\a\n\z\l\8\h\c\j\s\2\a\1\u\s\m\g\d\a\8\1\t\i\t\y\s\x\w\y\p\y\u\a\6\7\2\h\0\5\p\w\c\s\8\5\o\c\y\8\0\9\c\0\z\y\y\l\a\s\c\k\t\c\0\w\v\a\u\e\1\z\a\8\c\a\l\n\j\m\2\0\l\m\4\k\o\j\4\0\1\4\4\s\4\6\7\m\h\f\s\z\s\r\i\n\u\0\4\v\9\r\0\u\w\h\l\o\4\1\o\8\z\n\a\7\0\q\y\m\a\p\f\p\u\w\0\s\l\y\w\f\s\h\j\5\t\i\w\j\p\w\u\5\s\o\k\l\m\o\0\1\r\j\z\c\5\6\3\w\b\j\3\d\5\4\h\x\9\s\7\g\z\o\2\e\9\v\e\q\k\z\9\i\9\c\8\p\1\3\3\3\i\l\8\y\u\z\5\x\m\o\7\s\d\c\y\o\g\t\t\3\r\h\i\r\k\s\s\2\b\f\f\m\x\j\k\p\h\s\y\c\6\x\x\2\k\y\n\z\l\m\l\y\y\q\i\o\5\w\m\x\5\5\e\8\z\u\h\7\6\q\q\g\d\5\y\k\x\j\9\b\5\j\w\b\y\g\y\t\h\m\u\c\h\t\1\k\q\s\j\e\5\v\b\3\f\w\a\d\z\x\c\b\9\z\b\n\c\y\y\a\q\d\b\c\k\c\p\7\5\j\u\x\6\1\1\f\x\l\i\u\a\n\i\v\t\6\y\u\n\m\a\v\2\r\n\v\k\d\q\f\w\m\9\o\e\y\m\e\9\q\d\p\k\m\8\0\z\6\q\h\9\3\h\k\k\a\u\w\9\h\a\p\n\k ]] 00:07:13.111 07:11:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:13.111 07:11:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ n4rjcroiexja2q2zz08iykmct7r27qkbcty894udqr49t0h4edwsns4w1n1pynsvf1kw00e2d55h1b3hmbvu4s9e7q9ugp4dslz0kged4nwbcawjqpittmcc19ectn42jbdf2lpcai8g1qzm5ywfd3jix9m84xb05mq26mua9yu9bnj18rik03qnq6jz1siwp220foki42x4cqay2le2cb2mc1vveb4qrf798toxcripwn65tjkd62aldjlovvatok090b44smkypqxixbhtw2b10thwgvh5937klsxzh7glo6mltu18s0qefobjz7zgdn6e7o7onxmgrnp2jm5zz8nlfgjix6fs1njcb194b1rraoj0v4h5df25t21i33s34b4xxnhcrtf2zbr98xis4wi1ratdqh7dzaie6mfqef738ci9q8otcgq42xgkla9avmirfl8xrdcmuej1aotijv6kkvczg6847y672yrdijrlcrtyihcv9r8qg5zc8yezn01spa5133qbqx0gy2o5gu7cnr084fv9ofq4dzybr46dzqjg4azyat588y3wlmqfpbr9zhzom5gmrd0r5ni1zbjm8hskl3n91f0egqtr1tfi76104shvcjanzl8hcjs2a1usmgda81titysxwypyua672h05pwcs85ocy809c0zyylascktc0wvaue1za8calnjm20lm4koj40144s467mhfszsrinu04v9r0uwhlo41o8zna70qymapfpuw0slywfshj5tiwjpwu5soklmo01rjzc563wbj3d54hx9s7gzo2e9veqkz9i9c8p1333il8yuz5xmo7sdcyogtt3rhirkss2bffmxjkphsyc6xx2kynzlmlyyqio5wmx55e8zuh76qqgd5ykxj9b5jwbygythmucht1kqsje5vb3fwadzxcb9zbncyyaqdbckcp75jux611fxliuanivt6yunmav2rnvkdqfwm9oeyme9qdpkm80z6qh93hkkauw9hapnk == 
\n\4\r\j\c\r\o\i\e\x\j\a\2\q\2\z\z\0\8\i\y\k\m\c\t\7\r\2\7\q\k\b\c\t\y\8\9\4\u\d\q\r\4\9\t\0\h\4\e\d\w\s\n\s\4\w\1\n\1\p\y\n\s\v\f\1\k\w\0\0\e\2\d\5\5\h\1\b\3\h\m\b\v\u\4\s\9\e\7\q\9\u\g\p\4\d\s\l\z\0\k\g\e\d\4\n\w\b\c\a\w\j\q\p\i\t\t\m\c\c\1\9\e\c\t\n\4\2\j\b\d\f\2\l\p\c\a\i\8\g\1\q\z\m\5\y\w\f\d\3\j\i\x\9\m\8\4\x\b\0\5\m\q\2\6\m\u\a\9\y\u\9\b\n\j\1\8\r\i\k\0\3\q\n\q\6\j\z\1\s\i\w\p\2\2\0\f\o\k\i\4\2\x\4\c\q\a\y\2\l\e\2\c\b\2\m\c\1\v\v\e\b\4\q\r\f\7\9\8\t\o\x\c\r\i\p\w\n\6\5\t\j\k\d\6\2\a\l\d\j\l\o\v\v\a\t\o\k\0\9\0\b\4\4\s\m\k\y\p\q\x\i\x\b\h\t\w\2\b\1\0\t\h\w\g\v\h\5\9\3\7\k\l\s\x\z\h\7\g\l\o\6\m\l\t\u\1\8\s\0\q\e\f\o\b\j\z\7\z\g\d\n\6\e\7\o\7\o\n\x\m\g\r\n\p\2\j\m\5\z\z\8\n\l\f\g\j\i\x\6\f\s\1\n\j\c\b\1\9\4\b\1\r\r\a\o\j\0\v\4\h\5\d\f\2\5\t\2\1\i\3\3\s\3\4\b\4\x\x\n\h\c\r\t\f\2\z\b\r\9\8\x\i\s\4\w\i\1\r\a\t\d\q\h\7\d\z\a\i\e\6\m\f\q\e\f\7\3\8\c\i\9\q\8\o\t\c\g\q\4\2\x\g\k\l\a\9\a\v\m\i\r\f\l\8\x\r\d\c\m\u\e\j\1\a\o\t\i\j\v\6\k\k\v\c\z\g\6\8\4\7\y\6\7\2\y\r\d\i\j\r\l\c\r\t\y\i\h\c\v\9\r\8\q\g\5\z\c\8\y\e\z\n\0\1\s\p\a\5\1\3\3\q\b\q\x\0\g\y\2\o\5\g\u\7\c\n\r\0\8\4\f\v\9\o\f\q\4\d\z\y\b\r\4\6\d\z\q\j\g\4\a\z\y\a\t\5\8\8\y\3\w\l\m\q\f\p\b\r\9\z\h\z\o\m\5\g\m\r\d\0\r\5\n\i\1\z\b\j\m\8\h\s\k\l\3\n\9\1\f\0\e\g\q\t\r\1\t\f\i\7\6\1\0\4\s\h\v\c\j\a\n\z\l\8\h\c\j\s\2\a\1\u\s\m\g\d\a\8\1\t\i\t\y\s\x\w\y\p\y\u\a\6\7\2\h\0\5\p\w\c\s\8\5\o\c\y\8\0\9\c\0\z\y\y\l\a\s\c\k\t\c\0\w\v\a\u\e\1\z\a\8\c\a\l\n\j\m\2\0\l\m\4\k\o\j\4\0\1\4\4\s\4\6\7\m\h\f\s\z\s\r\i\n\u\0\4\v\9\r\0\u\w\h\l\o\4\1\o\8\z\n\a\7\0\q\y\m\a\p\f\p\u\w\0\s\l\y\w\f\s\h\j\5\t\i\w\j\p\w\u\5\s\o\k\l\m\o\0\1\r\j\z\c\5\6\3\w\b\j\3\d\5\4\h\x\9\s\7\g\z\o\2\e\9\v\e\q\k\z\9\i\9\c\8\p\1\3\3\3\i\l\8\y\u\z\5\x\m\o\7\s\d\c\y\o\g\t\t\3\r\h\i\r\k\s\s\2\b\f\f\m\x\j\k\p\h\s\y\c\6\x\x\2\k\y\n\z\l\m\l\y\y\q\i\o\5\w\m\x\5\5\e\8\z\u\h\7\6\q\q\g\d\5\y\k\x\j\9\b\5\j\w\b\y\g\y\t\h\m\u\c\h\t\1\k\q\s\j\e\5\v\b\3\f\w\a\d\z\x\c\b\9\z\b\n\c\y\y\a\q\d\b\c\k\c\p\7\5\j\u\x\6\1\1\f\x\l\i\u\a\n\i\v\t\6\y\u\n\m\a\v\2\r\n\v\k\d\q\f\w\m\9\o\e\y\m\e\9\q\d\p\k\m\8\0\z\6\q\h\9\3\h\k\k\a\u\w\9\h\a\p\n\k ]] 00:07:13.111 07:11:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:13.369 07:11:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:13.369 07:11:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:13.370 07:11:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:13.370 07:11:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:13.628 [2024-07-15 07:11:22.333480] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
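The dd_uring_copy trace above hot-adds a zram device, layers a uring bdev on it, pushes the magic-seeded dump file through io_uring, reads it back, and verifies the two dumps with diff -q. A condensed sketch of that round trip, assuming this run's device numbering, dump-file names, and build path, is:

#!/usr/bin/env bash
# Sketch of the zram/io_uring round trip traced above. The hot-added device id,
# the magic.dump0/magic.dump1 names, and the spdk_dd path follow this run;
# magic.dump0 is assumed to exist already (the ~512 MiB magic-seeded file
# produced earlier in the trace).
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

id=$(cat /sys/class/zram-control/hot_add)     # hot-add a zram device, prints its id
echo 512M > "/sys/block/zram${id}/disksize"   # size it, as set_zram_dev does above

conf() { cat <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 },
    "method": "bdev_malloc_create" },
  { "params": { "name": "uring0", "filename": "/dev/zram${id}" },
    "method": "bdev_uring_create" },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
}

"$SPDK_DD" --if=magic.dump0 --ob=uring0 --json <(conf)   # write the dump through io_uring
"$SPDK_DD" --ib=uring0 --of=magic.dump1 --json <(conf)   # read it back
diff -q magic.dump0 magic.dump1                          # dumps must match byte for byte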
00:07:13.628 [2024-07-15 07:11:22.333555] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63950 ] 00:07:13.628 { 00:07:13.628 "subsystems": [ 00:07:13.628 { 00:07:13.628 "subsystem": "bdev", 00:07:13.628 "config": [ 00:07:13.628 { 00:07:13.628 "params": { 00:07:13.628 "block_size": 512, 00:07:13.628 "num_blocks": 1048576, 00:07:13.628 "name": "malloc0" 00:07:13.628 }, 00:07:13.628 "method": "bdev_malloc_create" 00:07:13.628 }, 00:07:13.628 { 00:07:13.628 "params": { 00:07:13.628 "filename": "/dev/zram1", 00:07:13.628 "name": "uring0" 00:07:13.628 }, 00:07:13.628 "method": "bdev_uring_create" 00:07:13.628 }, 00:07:13.628 { 00:07:13.628 "method": "bdev_wait_for_examine" 00:07:13.628 } 00:07:13.628 ] 00:07:13.628 } 00:07:13.628 ] 00:07:13.628 } 00:07:13.628 [2024-07-15 07:11:22.466954] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.628 [2024-07-15 07:11:22.523131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.628 [2024-07-15 07:11:22.552163] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.700  Copying: 151/512 [MB] (151 MBps) Copying: 297/512 [MB] (146 MBps) Copying: 440/512 [MB] (143 MBps) Copying: 512/512 [MB] (average 146 MBps) 00:07:17.700 00:07:17.700 07:11:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:17.700 07:11:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:17.700 07:11:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:17.700 07:11:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:17.700 07:11:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:17.700 07:11:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:17.700 07:11:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:17.700 07:11:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:17.700 [2024-07-15 07:11:26.489448] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:07:17.700 [2024-07-15 07:11:26.489570] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64002 ] 00:07:17.700 { 00:07:17.700 "subsystems": [ 00:07:17.700 { 00:07:17.700 "subsystem": "bdev", 00:07:17.700 "config": [ 00:07:17.700 { 00:07:17.700 "params": { 00:07:17.700 "block_size": 512, 00:07:17.700 "num_blocks": 1048576, 00:07:17.700 "name": "malloc0" 00:07:17.700 }, 00:07:17.700 "method": "bdev_malloc_create" 00:07:17.700 }, 00:07:17.700 { 00:07:17.700 "params": { 00:07:17.700 "filename": "/dev/zram1", 00:07:17.700 "name": "uring0" 00:07:17.700 }, 00:07:17.700 "method": "bdev_uring_create" 00:07:17.700 }, 00:07:17.700 { 00:07:17.700 "params": { 00:07:17.700 "name": "uring0" 00:07:17.700 }, 00:07:17.700 "method": "bdev_uring_delete" 00:07:17.700 }, 00:07:17.700 { 00:07:17.700 "method": "bdev_wait_for_examine" 00:07:17.700 } 00:07:17.700 ] 00:07:17.700 } 00:07:17.700 ] 00:07:17.700 } 00:07:17.700 [2024-07-15 07:11:26.627674] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.959 [2024-07-15 07:11:26.688606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.959 [2024-07-15 07:11:26.726213] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:18.218  Copying: 0/0 [B] (average 0 Bps) 00:07:18.218 00:07:18.218 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:18.218 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:07:18.218 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:18.218 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.218 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:18.218 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:18.218 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:18.218 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:18.218 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.218 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.218 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.218 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.218 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.218 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.218 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.218 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:18.218 { 00:07:18.218 "subsystems": [ 00:07:18.218 { 00:07:18.218 "subsystem": "bdev", 00:07:18.218 "config": [ 00:07:18.218 { 00:07:18.218 "params": { 00:07:18.218 "block_size": 512, 00:07:18.218 "num_blocks": 1048576, 00:07:18.218 "name": "malloc0" 00:07:18.218 }, 00:07:18.218 "method": "bdev_malloc_create" 00:07:18.218 }, 00:07:18.218 { 00:07:18.218 "params": { 00:07:18.218 "filename": "/dev/zram1", 00:07:18.218 "name": "uring0" 00:07:18.218 }, 00:07:18.218 "method": "bdev_uring_create" 00:07:18.218 }, 00:07:18.218 { 00:07:18.218 "params": { 00:07:18.218 "name": "uring0" 00:07:18.218 }, 00:07:18.218 "method": "bdev_uring_delete" 00:07:18.218 }, 00:07:18.218 { 00:07:18.218 "method": "bdev_wait_for_examine" 00:07:18.218 } 00:07:18.218 ] 00:07:18.218 } 00:07:18.218 ] 00:07:18.218 } 00:07:18.218 [2024-07-15 07:11:27.163168] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:07:18.218 [2024-07-15 07:11:27.163243] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64033 ] 00:07:18.477 [2024-07-15 07:11:27.294641] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.477 [2024-07-15 07:11:27.356963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.477 [2024-07-15 07:11:27.387944] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:18.753 [2024-07-15 07:11:27.519609] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:18.753 [2024-07-15 07:11:27.519656] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:18.753 [2024-07-15 07:11:27.519667] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:18.753 [2024-07-15 07:11:27.519677] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.753 [2024-07-15 07:11:27.696467] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:19.011 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:07:19.011 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.011 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:07:19.011 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:07:19.011 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:07:19.011 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.011 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:19.011 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:07:19.011 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:07:19.011 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:07:19.011 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:07:19.011 07:11:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:19.269 00:07:19.269 real 0m13.926s 00:07:19.269 user 0m9.658s 00:07:19.269 sys 0m11.661s 00:07:19.269 ************************************ 00:07:19.269 END TEST dd_uring_copy 00:07:19.269 ************************************ 00:07:19.269 07:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.269 07:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:19.269 07:11:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:07:19.269 00:07:19.269 real 0m14.069s 00:07:19.269 user 0m9.718s 00:07:19.269 sys 0m11.744s 00:07:19.269 07:11:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.269 07:11:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:19.269 ************************************ 00:07:19.269 END TEST spdk_dd_uring 00:07:19.269 ************************************ 00:07:19.269 07:11:28 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:19.269 07:11:28 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:19.269 07:11:28 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.269 07:11:28 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.269 07:11:28 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:19.269 ************************************ 00:07:19.269 START TEST spdk_dd_sparse 00:07:19.269 ************************************ 00:07:19.269 07:11:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:19.528 * Looking for test storage... 00:07:19.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:19.528 1+0 records in 00:07:19.528 1+0 records out 00:07:19.528 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00589172 s, 712 MB/s 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:19.528 1+0 records in 00:07:19.528 1+0 records out 00:07:19.528 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00586596 s, 715 MB/s 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:19.528 1+0 records in 00:07:19.528 1+0 records out 00:07:19.528 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00530479 s, 791 MB/s 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:19.528 ************************************ 00:07:19.528 START TEST dd_sparse_file_to_file 00:07:19.528 ************************************ 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # 
file_to_file 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:19.528 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:19.528 [2024-07-15 07:11:28.344874] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:07:19.528 [2024-07-15 07:11:28.344983] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64119 ] 00:07:19.528 { 00:07:19.528 "subsystems": [ 00:07:19.528 { 00:07:19.528 "subsystem": "bdev", 00:07:19.528 "config": [ 00:07:19.528 { 00:07:19.528 "params": { 00:07:19.528 "block_size": 4096, 00:07:19.528 "filename": "dd_sparse_aio_disk", 00:07:19.528 "name": "dd_aio" 00:07:19.528 }, 00:07:19.528 "method": "bdev_aio_create" 00:07:19.528 }, 00:07:19.528 { 00:07:19.528 "params": { 00:07:19.528 "lvs_name": "dd_lvstore", 00:07:19.528 "bdev_name": "dd_aio" 00:07:19.528 }, 00:07:19.528 "method": "bdev_lvol_create_lvstore" 00:07:19.528 }, 00:07:19.528 { 00:07:19.528 "method": "bdev_wait_for_examine" 00:07:19.528 } 00:07:19.528 ] 00:07:19.528 } 00:07:19.528 ] 00:07:19.528 } 00:07:19.786 [2024-07-15 07:11:28.485011] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.787 [2024-07-15 07:11:28.558087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.787 [2024-07-15 07:11:28.592562] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:20.046  Copying: 12/36 [MB] (average 1090 MBps) 00:07:20.046 00:07:20.046 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:20.046 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:20.046 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:20.046 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:20.046 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:20.046 07:11:28 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:20.046 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:20.046 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:20.046 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:20.046 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:20.046 00:07:20.046 real 0m0.617s 00:07:20.046 user 0m0.403s 00:07:20.046 sys 0m0.265s 00:07:20.046 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.046 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:20.046 ************************************ 00:07:20.046 END TEST dd_sparse_file_to_file 00:07:20.046 ************************************ 00:07:20.046 07:11:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:20.046 07:11:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:20.046 07:11:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:20.046 07:11:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.046 07:11:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:20.046 ************************************ 00:07:20.046 START TEST dd_sparse_file_to_bdev 00:07:20.046 ************************************ 00:07:20.046 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:07:20.046 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:20.046 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:20.046 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:20.046 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:20.046 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:20.046 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:20.046 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:20.046 07:11:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:20.306 [2024-07-15 07:11:29.015715] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:07:20.306 [2024-07-15 07:11:29.015815] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64161 ] 00:07:20.306 { 00:07:20.306 "subsystems": [ 00:07:20.306 { 00:07:20.306 "subsystem": "bdev", 00:07:20.306 "config": [ 00:07:20.306 { 00:07:20.306 "params": { 00:07:20.306 "block_size": 4096, 00:07:20.306 "filename": "dd_sparse_aio_disk", 00:07:20.306 "name": "dd_aio" 00:07:20.306 }, 00:07:20.306 "method": "bdev_aio_create" 00:07:20.306 }, 00:07:20.306 { 00:07:20.306 "params": { 00:07:20.306 "lvs_name": "dd_lvstore", 00:07:20.306 "lvol_name": "dd_lvol", 00:07:20.306 "size_in_mib": 36, 00:07:20.306 "thin_provision": true 00:07:20.306 }, 00:07:20.306 "method": "bdev_lvol_create" 00:07:20.306 }, 00:07:20.306 { 00:07:20.306 "method": "bdev_wait_for_examine" 00:07:20.306 } 00:07:20.306 ] 00:07:20.306 } 00:07:20.306 ] 00:07:20.306 } 00:07:20.306 [2024-07-15 07:11:29.155721] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.306 [2024-07-15 07:11:29.226760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.565 [2024-07-15 07:11:29.260083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:20.565  Copying: 12/36 [MB] (average 600 MBps) 00:07:20.565 00:07:20.824 00:07:20.824 real 0m0.561s 00:07:20.824 user 0m0.368s 00:07:20.824 sys 0m0.251s 00:07:20.824 07:11:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.824 07:11:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:20.824 ************************************ 00:07:20.824 END TEST dd_sparse_file_to_bdev 00:07:20.824 ************************************ 00:07:20.824 07:11:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:20.824 07:11:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:20.824 07:11:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:20.824 07:11:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.824 07:11:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:20.824 ************************************ 00:07:20.824 START TEST dd_sparse_bdev_to_file 00:07:20.824 ************************************ 00:07:20.824 07:11:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:07:20.824 07:11:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:20.824 07:11:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:20.824 07:11:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:20.824 07:11:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:20.824 07:11:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:20.824 07:11:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 
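The bdev_to_file pass being prepared here uses the same configuration trick as the two sparse passes that just completed: the test renders a bdev subsystem config in memory (gen_conf) and hands it to spdk_dd over a process-substitution file descriptor (--json /dev/fd/62), so no config file is written alongside the test artifacts. A minimal stand-alone sketch of that pattern, assuming the dd_sparse_aio_disk backing file and the dd_lvstore/dd_lvol volume created earlier in this run already exist; the conf helper below is illustrative and is not the real gen_conf from dd/common.sh:

#!/usr/bin/env bash
# Sketch of the --json /dev/fd pattern used by these sparse tests (hypothetical helper).
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

conf() {
cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "block_size": 4096, "filename": "dd_sparse_aio_disk", "name": "dd_aio" },
          "method": "bdev_aio_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}

# bdev_wait_for_examine gives the lvol module time to claim dd_aio and surface the
# lvstore created earlier, so dd_lvstore/dd_lvol resolves without an explicit
# bdev_lvol_create_lvstore entry in this pass.
"$SPDK_DD" --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json <(conf)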
00:07:20.824 07:11:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:20.824 07:11:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:20.824 [2024-07-15 07:11:29.631032] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:07:20.824 [2024-07-15 07:11:29.631152] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64194 ] 00:07:20.824 { 00:07:20.824 "subsystems": [ 00:07:20.824 { 00:07:20.824 "subsystem": "bdev", 00:07:20.824 "config": [ 00:07:20.824 { 00:07:20.824 "params": { 00:07:20.824 "block_size": 4096, 00:07:20.824 "filename": "dd_sparse_aio_disk", 00:07:20.824 "name": "dd_aio" 00:07:20.824 }, 00:07:20.824 "method": "bdev_aio_create" 00:07:20.824 }, 00:07:20.824 { 00:07:20.824 "method": "bdev_wait_for_examine" 00:07:20.824 } 00:07:20.824 ] 00:07:20.824 } 00:07:20.824 ] 00:07:20.824 } 00:07:20.824 [2024-07-15 07:11:29.767528] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.082 [2024-07-15 07:11:29.837313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.082 [2024-07-15 07:11:29.873274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:21.340  Copying: 12/36 [MB] (average 1090 MBps) 00:07:21.340 00:07:21.340 07:11:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:21.340 07:11:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:21.340 07:11:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:21.340 07:11:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:21.340 07:11:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:21.340 07:11:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:21.340 07:11:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:21.340 07:11:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:21.340 07:11:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:21.340 07:11:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:21.340 00:07:21.340 real 0m0.567s 00:07:21.340 user 0m0.365s 00:07:21.340 sys 0m0.256s 00:07:21.340 07:11:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.340 07:11:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:21.340 ************************************ 00:07:21.340 END TEST dd_sparse_bdev_to_file 00:07:21.340 ************************************ 00:07:21.340 07:11:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:21.340 07:11:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:21.340 07:11:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:21.340 07:11:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:21.340 07:11:30 spdk_dd.spdk_dd_sparse 
-- dd/sparse.sh@13 -- # rm file_zero2 00:07:21.340 07:11:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:21.340 00:07:21.340 real 0m2.048s 00:07:21.340 user 0m1.246s 00:07:21.340 sys 0m0.950s 00:07:21.340 07:11:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.340 07:11:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:21.340 ************************************ 00:07:21.340 END TEST spdk_dd_sparse 00:07:21.340 ************************************ 00:07:21.340 07:11:30 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:21.340 07:11:30 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:21.340 07:11:30 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.340 07:11:30 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.340 07:11:30 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:21.340 ************************************ 00:07:21.340 START TEST spdk_dd_negative 00:07:21.340 ************************************ 00:07:21.340 07:11:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:21.600 * Looking for test storage... 00:07:21.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:21.600 ************************************ 00:07:21.600 START TEST dd_invalid_arguments 00:07:21.600 ************************************ 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.600 07:11:30 
spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:21.600 07:11:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:21.600 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:21.600 00:07:21.600 CPU options: 00:07:21.600 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:21.600 (like [0,1,10]) 00:07:21.600 --lcores lcore to CPU mapping list. The list is in the format: 00:07:21.600 [<,lcores[@CPUs]>...] 00:07:21.600 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:21.600 Within the group, '-' is used for range separator, 00:07:21.600 ',' is used for single number separator. 00:07:21.600 '( )' can be omitted for single element group, 00:07:21.600 '@' can be omitted if cpus and lcores have the same value 00:07:21.600 --disable-cpumask-locks Disable CPU core lock files. 00:07:21.600 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:21.600 pollers in the app support interrupt mode) 00:07:21.600 -p, --main-core main (primary) core for DPDK 00:07:21.600 00:07:21.600 Configuration options: 00:07:21.600 -c, --config, --json JSON config file 00:07:21.600 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:21.600 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:21.600 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:21.600 --rpcs-allowed comma-separated list of permitted RPCS 00:07:21.600 --json-ignore-init-errors don't exit on invalid config entry 00:07:21.600 00:07:21.600 Memory options: 00:07:21.600 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:21.600 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:21.600 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:21.600 -R, --huge-unlink unlink huge files after initialization 00:07:21.600 -n, --mem-channels number of memory channels used for DPDK 00:07:21.600 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:21.600 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:21.600 --no-huge run without using hugepages 00:07:21.600 -i, --shm-id shared memory ID (optional) 00:07:21.600 -g, --single-file-segments force creating just one hugetlbfs file 00:07:21.600 00:07:21.600 PCI options: 00:07:21.600 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:21.600 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:21.600 -u, --no-pci disable PCI access 00:07:21.600 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:21.600 00:07:21.600 Log options: 00:07:21.600 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:21.600 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:21.600 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:21.600 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:21.600 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:07:21.600 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:07:21.600 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:07:21.601 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:07:21.601 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:07:21.601 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:07:21.601 virtio_vfio_user, vmd) 00:07:21.601 --silence-noticelog disable notice level logging to stderr 00:07:21.601 00:07:21.601 Trace options: 00:07:21.601 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:21.601 setting 0 to disable trace (default 32768) 00:07:21.601 Tracepoints vary in size and can use more than one trace entry. 00:07:21.601 -e, --tpoint-group [:] 00:07:21.601 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:21.601 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:21.601 [2024-07-15 07:11:30.421563] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:21.601 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:07:21.601 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:21.601 a tracepoint group. First tpoint inside a group can be enabled by 00:07:21.601 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:21.601 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:21.601 in /include/spdk_internal/trace_defs.h 00:07:21.601 00:07:21.601 Other options: 00:07:21.601 -h, --help show this usage 00:07:21.601 -v, --version print SPDK version 00:07:21.601 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:21.601 --env-context Opaque context for use of the env implementation 00:07:21.601 00:07:21.601 Application specific: 00:07:21.601 [--------- DD Options ---------] 00:07:21.601 --if Input file. Must specify either --if or --ib. 00:07:21.601 --ib Input bdev. Must specifier either --if or --ib 00:07:21.601 --of Output file. Must specify either --of or --ob. 00:07:21.601 --ob Output bdev. Must specify either --of or --ob. 00:07:21.601 --iflag Input file flags. 00:07:21.601 --oflag Output file flags. 00:07:21.601 --bs I/O unit size (default: 4096) 00:07:21.601 --qd Queue depth (default: 2) 00:07:21.601 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:21.601 --skip Skip this many I/O units at start of input. (default: 0) 00:07:21.601 --seek Skip this many I/O units at start of output. (default: 0) 00:07:21.601 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:21.601 --sparse Enable hole skipping in input target 00:07:21.601 Available iflag and oflag values: 00:07:21.601 append - append mode 00:07:21.601 direct - use direct I/O for data 00:07:21.601 directory - fail unless a directory 00:07:21.601 dsync - use synchronized I/O for data 00:07:21.601 noatime - do not update access time 00:07:21.601 noctty - do not assign controlling terminal from file 00:07:21.601 nofollow - do not follow symlinks 00:07:21.601 nonblock - use non-blocking I/O 00:07:21.601 sync - use synchronized I/O for data and metadata 00:07:21.601 07:11:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:07:21.601 07:11:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.601 07:11:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:21.601 07:11:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.601 00:07:21.601 real 0m0.076s 00:07:21.601 user 0m0.042s 00:07:21.601 sys 0m0.033s 00:07:21.601 07:11:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.601 07:11:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:21.601 ************************************ 00:07:21.601 END TEST dd_invalid_arguments 00:07:21.601 ************************************ 00:07:21.601 07:11:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:21.601 07:11:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:21.601 07:11:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.601 07:11:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.601 07:11:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:21.601 ************************************ 00:07:21.601 START TEST dd_double_input 00:07:21.601 ************************************ 00:07:21.601 07:11:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:07:21.601 07:11:30 spdk_dd.spdk_dd_negative.dd_double_input -- 
dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:21.601 07:11:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:07:21.601 07:11:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:21.601 07:11:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.601 07:11:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.601 07:11:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.601 07:11:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.601 07:11:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.601 07:11:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.601 07:11:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.601 07:11:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:21.601 07:11:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:21.601 [2024-07-15 07:11:30.547574] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
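Each case in this negative suite wraps the spdk_dd call in the NOT helper from test/common/autotest_common.sh: the invocation is expected to fail, and the wrapper inverts the result so the test only passes when spdk_dd rejects the bad arguments (here, --if and --ib supplied together). A simplified stand-in for that pattern, not the real helper, which also records and inspects the exact exit status (the es=22 bookkeeping in the trace that follows):

#!/usr/bin/env bash
# Simplified sketch of the NOT negative-test pattern (illustrative, not the SPDK helper).
not() {
  if "$@"; then
    return 1   # command unexpectedly succeeded, so the negative test fails
  else
    return 0   # command failed as intended
  fi
}

# spdk_dd must refuse an input file and an input bdev given at the same time:
not /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
  --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=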
00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.861 00:07:21.861 real 0m0.073s 00:07:21.861 user 0m0.043s 00:07:21.861 sys 0m0.029s 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:21.861 ************************************ 00:07:21.861 END TEST dd_double_input 00:07:21.861 ************************************ 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:21.861 ************************************ 00:07:21.861 START TEST dd_double_output 00:07:21.861 ************************************ 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_double_output -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:21.861 [2024-07-15 07:11:30.682904] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.861 00:07:21.861 real 0m0.078s 00:07:21.861 user 0m0.046s 00:07:21.861 sys 0m0.031s 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.861 ************************************ 00:07:21.861 END TEST dd_double_output 00:07:21.861 ************************************ 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:21.861 ************************************ 00:07:21.861 START TEST dd_no_input 00:07:21.861 ************************************ 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.861 07:11:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:21.861 07:11:30 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:22.121 [2024-07-15 07:11:30.813447] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:22.121 00:07:22.121 real 0m0.080s 00:07:22.121 user 0m0.045s 00:07:22.121 sys 0m0.033s 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.121 ************************************ 00:07:22.121 END TEST dd_no_input 00:07:22.121 ************************************ 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:22.121 ************************************ 00:07:22.121 START TEST dd_no_output 00:07:22.121 ************************************ 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:22.121 07:11:30 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:22.121 [2024-07-15 07:11:30.939299] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:22.121 00:07:22.121 real 0m0.076s 00:07:22.121 user 0m0.050s 00:07:22.121 sys 0m0.024s 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.121 ************************************ 00:07:22.121 END TEST dd_no_output 00:07:22.121 ************************************ 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.121 07:11:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:22.121 ************************************ 00:07:22.121 START TEST dd_wrong_blocksize 00:07:22.121 ************************************ 00:07:22.121 07:11:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:07:22.121 07:11:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:22.121 07:11:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:07:22.121 07:11:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:22.121 07:11:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.121 07:11:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.121 07:11:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.121 07:11:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.122 07:11:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.122 07:11:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.122 07:11:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.122 07:11:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:22.122 07:11:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:22.122 [2024-07-15 07:11:31.048242] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:22.122 07:11:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:07:22.122 07:11:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:22.122 07:11:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:22.122 07:11:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:22.122 00:07:22.122 real 0m0.060s 00:07:22.122 user 0m0.039s 00:07:22.122 sys 0m0.020s 00:07:22.122 07:11:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.122 ************************************ 00:07:22.122 END TEST dd_wrong_blocksize 00:07:22.122 ************************************ 00:07:22.122 07:11:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:22.381 07:11:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:22.381 07:11:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:22.381 07:11:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:22.381 07:11:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.381 07:11:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:22.381 ************************************ 00:07:22.381 START TEST dd_smaller_blocksize 00:07:22.381 ************************************ 00:07:22.381 07:11:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:07:22.381 07:11:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:22.381 07:11:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:07:22.381 07:11:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:22.381 07:11:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.381 07:11:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.381 07:11:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.381 07:11:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:07:22.381 07:11:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.381 07:11:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.381 07:11:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.381 07:11:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:22.381 07:11:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:22.381 [2024-07-15 07:11:31.158765] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:07:22.381 [2024-07-15 07:11:31.158848] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64418 ] 00:07:22.381 [2024-07-15 07:11:31.296556] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.640 [2024-07-15 07:11:31.383781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.640 [2024-07-15 07:11:31.420762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:22.898 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:22.898 [2024-07-15 07:11:31.705775] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:22.898 [2024-07-15 07:11:31.705826] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:22.898 [2024-07-15 07:11:31.778237] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:23.157 00:07:23.157 real 0m0.751s 00:07:23.157 user 0m0.341s 00:07:23.157 sys 0m0.303s 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.157 ************************************ 00:07:23.157 END TEST dd_smaller_blocksize 00:07:23.157 ************************************ 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:23.157 ************************************ 00:07:23.157 START TEST dd_invalid_count 00:07:23.157 ************************************ 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:23.157 07:11:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:23.157 [2024-07-15 07:11:31.995547] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:23.158 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:07:23.158 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:23.158 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:23.158 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:23.158 00:07:23.158 real 0m0.097s 00:07:23.158 user 0m0.074s 00:07:23.158 sys 0m0.021s 00:07:23.158 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.158 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:23.158 ************************************ 00:07:23.158 END TEST dd_invalid_count 
00:07:23.158 ************************************ 00:07:23.158 07:11:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:23.158 07:11:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:23.158 07:11:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:23.158 07:11:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.158 07:11:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:23.158 ************************************ 00:07:23.158 START TEST dd_invalid_oflag 00:07:23.158 ************************************ 00:07:23.158 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:07:23.158 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:23.158 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:07:23.158 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:23.158 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.158 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.158 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.158 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.158 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.158 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.158 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.158 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:23.158 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:23.417 [2024-07-15 07:11:32.124572] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:23.417 00:07:23.417 real 0m0.071s 00:07:23.417 user 0m0.046s 00:07:23.417 sys 0m0.025s 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.417 ************************************ 00:07:23.417 END TEST dd_invalid_oflag 00:07:23.417 ************************************ 00:07:23.417 
07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:23.417 ************************************ 00:07:23.417 START TEST dd_invalid_iflag 00:07:23.417 ************************************ 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:23.417 [2024-07-15 07:11:32.251340] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:23.417 00:07:23.417 real 0m0.077s 00:07:23.417 user 0m0.050s 00:07:23.417 sys 0m0.026s 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 
00:07:23.417 ************************************ 00:07:23.417 END TEST dd_invalid_iflag 00:07:23.417 ************************************ 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:23.417 ************************************ 00:07:23.417 START TEST dd_unknown_flag 00:07:23.417 ************************************ 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:23.417 07:11:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:23.676 [2024-07-15 07:11:32.401714] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:07:23.676 [2024-07-15 07:11:32.401872] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64510 ] 00:07:23.676 [2024-07-15 07:11:32.543762] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.676 [2024-07-15 07:11:32.616536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.935 [2024-07-15 07:11:32.651555] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.935 [2024-07-15 07:11:32.673848] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:23.935 [2024-07-15 07:11:32.673914] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:23.935 [2024-07-15 07:11:32.673978] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:23.935 [2024-07-15 07:11:32.673994] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:23.935 [2024-07-15 07:11:32.674260] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:23.935 [2024-07-15 07:11:32.674281] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:23.935 [2024-07-15 07:11:32.674337] app.c:1039:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:23.935 [2024-07-15 07:11:32.674350] app.c:1039:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:23.935 [2024-07-15 07:11:32.747449] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:23.935 07:11:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:07:23.935 07:11:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:23.935 07:11:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:07:23.935 07:11:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:07:23.935 07:11:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:07:23.935 07:11:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:23.935 00:07:23.935 real 0m0.520s 00:07:23.935 user 0m0.288s 00:07:23.935 sys 0m0.139s 00:07:23.935 07:11:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.935 07:11:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:23.935 ************************************ 00:07:23.935 END TEST dd_unknown_flag 00:07:23.935 ************************************ 00:07:23.935 07:11:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:23.935 07:11:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:23.935 07:11:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:23.935 07:11:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.935 07:11:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:24.194 ************************************ 00:07:24.194 START TEST dd_invalid_json 00:07:24.194 ************************************ 00:07:24.194 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:07:24.194 07:11:32 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:24.194 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:07:24.194 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:07:24.194 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:24.194 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.194 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:24.194 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.194 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:24.194 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.194 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:24.194 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.194 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:24.194 07:11:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:24.194 [2024-07-15 07:11:32.946808] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:07:24.194 [2024-07-15 07:11:32.946918] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64533 ] 00:07:24.194 [2024-07-15 07:11:33.086442] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.465 [2024-07-15 07:11:33.147368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.465 [2024-07-15 07:11:33.147429] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:24.465 [2024-07-15 07:11:33.147447] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:24.465 [2024-07-15 07:11:33.147456] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:24.465 [2024-07-15 07:11:33.147532] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:24.465 07:11:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:07:24.465 07:11:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:24.465 07:11:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:07:24.465 07:11:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:07:24.465 07:11:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:07:24.465 07:11:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:24.465 00:07:24.465 real 0m0.350s 00:07:24.465 user 0m0.185s 00:07:24.465 sys 0m0.062s 00:07:24.466 07:11:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.466 07:11:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:24.466 ************************************ 00:07:24.466 END TEST dd_invalid_json 00:07:24.466 ************************************ 00:07:24.466 07:11:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:24.466 00:07:24.466 real 0m3.023s 00:07:24.466 user 0m1.472s 00:07:24.466 sys 0m1.194s 00:07:24.466 07:11:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.466 07:11:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:24.466 ************************************ 00:07:24.466 END TEST spdk_dd_negative 00:07:24.466 ************************************ 00:07:24.466 07:11:33 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:24.466 00:07:24.466 real 1m6.887s 00:07:24.466 user 0m44.215s 00:07:24.466 sys 0m26.762s 00:07:24.466 07:11:33 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.466 07:11:33 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:24.466 ************************************ 00:07:24.466 END TEST spdk_dd 00:07:24.466 ************************************ 00:07:24.466 07:11:33 -- common/autotest_common.sh@1142 -- # return 0 00:07:24.466 07:11:33 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:24.466 07:11:33 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:24.466 07:11:33 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:24.466 07:11:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:24.466 07:11:33 -- common/autotest_common.sh@10 -- # set +x 00:07:24.466 07:11:33 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 
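The xtrace above shows how each negative spdk_dd case asserts failure through the NOT helper in autotest_common.sh: the wrapped command's exit status is captured in es, statuses above 128 are folded down (es=244 becomes 116 and is then mapped to 1), and the case only passes when the final status is non-zero (es=22 for the invalid --bs/--count/--oflag/--iflag runs). A condensed Bash sketch of that pattern follows; it is illustrative only, not the actual helper, which additionally validates the executable via valid_exec_arg and maps specific statuses through a case block, and the paths in the usage comment are shortened for illustration.

    # Simplified stand-in for the NOT helper traced above (illustrative only).
    NOT() {
        local es=0
        "$@" || es=$?                         # e.g. spdk_dd --bs=0 exits with 22
        (( es > 128 )) && es=$(( es - 128 ))  # assumption: fold high statuses, as with es=244 -> 116 above
        (( es != 0 ))                         # succeed only if the wrapped command failed
    }

    # Usage mirroring dd_wrong_blocksize above (paths shortened for illustration):
    # NOT ./build/bin/spdk_dd --if=dd.dump0 --of=dd.dump1 --bs=0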
00:07:24.466 07:11:33 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:24.466 07:11:33 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:24.466 07:11:33 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:24.466 07:11:33 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:24.466 07:11:33 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:24.466 07:11:33 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:24.466 07:11:33 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:24.466 07:11:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.467 07:11:33 -- common/autotest_common.sh@10 -- # set +x 00:07:24.730 ************************************ 00:07:24.730 START TEST nvmf_tcp 00:07:24.730 ************************************ 00:07:24.730 07:11:33 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:24.730 * Looking for test storage... 00:07:24.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:24.730 07:11:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:24.730 07:11:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:24.730 07:11:33 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:24.730 07:11:33 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:24.730 07:11:33 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.730 07:11:33 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.730 07:11:33 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.730 07:11:33 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.730 07:11:33 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.730 07:11:33 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.730 07:11:33 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.730 07:11:33 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.730 07:11:33 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.730 07:11:33 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.730 07:11:33 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:07:24.730 07:11:33 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:07:24.730 07:11:33 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.730 07:11:33 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.730 07:11:33 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:24.730 07:11:33 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.730 07:11:33 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:24.730 07:11:33 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.730 07:11:33 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.730 07:11:33 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.730 07:11:33 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.730 07:11:33 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.730 07:11:33 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.730 07:11:33 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:24.730 07:11:33 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.731 07:11:33 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:24.731 07:11:33 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:24.731 07:11:33 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:24.731 07:11:33 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.731 07:11:33 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.731 07:11:33 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.731 07:11:33 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:24.731 07:11:33 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:24.731 07:11:33 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:24.731 07:11:33 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:24.731 07:11:33 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:24.731 07:11:33 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:24.731 07:11:33 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:24.731 07:11:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:24.731 07:11:33 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:07:24.731 07:11:33 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:24.731 07:11:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:24.731 07:11:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.731 07:11:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:24.731 ************************************ 00:07:24.731 START TEST nvmf_host_management 00:07:24.731 ************************************ 00:07:24.731 
07:11:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:24.731 * Looking for test storage... 00:07:24.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:24.731 Cannot find device "nvmf_init_br" 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:24.731 Cannot find device "nvmf_tgt_br" 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:07:24.731 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:24.989 Cannot find device "nvmf_tgt_br2" 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:24.989 Cannot find device "nvmf_init_br" 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:24.989 Cannot find device "nvmf_tgt_br" 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:07:24.989 07:11:33 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:24.989 Cannot find device "nvmf_tgt_br2" 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:24.989 Cannot find device "nvmf_br" 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:24.989 Cannot find device "nvmf_init_if" 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:24.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:24.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
00:07:24.989 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:25.247 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:25.247 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:25.247 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:25.247 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:25.247 07:11:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:25.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:25.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:07:25.247 00:07:25.247 --- 10.0.0.2 ping statistics --- 00:07:25.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.247 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:25.247 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:25.247 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:07:25.247 00:07:25.247 --- 10.0.0.3 ping statistics --- 00:07:25.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.247 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:25.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:25.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:07:25.247 00:07:25.247 --- 10.0.0.1 ping statistics --- 00:07:25.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.247 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@481 -- # nvmfpid=64788 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 64788 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 64788 ']' 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.247 07:11:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.247 [2024-07-15 07:11:34.089933] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:07:25.247 [2024-07-15 07:11:34.090018] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.506 [2024-07-15 07:11:34.227142] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:25.506 [2024-07-15 07:11:34.300194] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.506 [2024-07-15 07:11:34.300247] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.506 [2024-07-15 07:11:34.300260] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.506 [2024-07-15 07:11:34.300270] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.506 [2024-07-15 07:11:34.300279] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:25.506 [2024-07-15 07:11:34.300438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.506 [2024-07-15 07:11:34.301150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.506 [2024-07-15 07:11:34.301291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:25.506 [2024-07-15 07:11:34.301455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.506 [2024-07-15 07:11:34.333693] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.440 [2024-07-15 07:11:35.091597] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.440 Malloc0 00:07:26.440 [2024-07-15 07:11:35.155663] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=64853 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 64853 /var/tmp/bdevperf.sock 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 64853 ']' 
00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:26.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:26.440 { 00:07:26.440 "params": { 00:07:26.440 "name": "Nvme$subsystem", 00:07:26.440 "trtype": "$TEST_TRANSPORT", 00:07:26.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:26.440 "adrfam": "ipv4", 00:07:26.440 "trsvcid": "$NVMF_PORT", 00:07:26.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:26.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:26.440 "hdgst": ${hdgst:-false}, 00:07:26.440 "ddgst": ${ddgst:-false} 00:07:26.440 }, 00:07:26.440 "method": "bdev_nvme_attach_controller" 00:07:26.440 } 00:07:26.440 EOF 00:07:26.440 )") 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:26.440 07:11:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:26.440 "params": { 00:07:26.440 "name": "Nvme0", 00:07:26.441 "trtype": "tcp", 00:07:26.441 "traddr": "10.0.0.2", 00:07:26.441 "adrfam": "ipv4", 00:07:26.441 "trsvcid": "4420", 00:07:26.441 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:26.441 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:26.441 "hdgst": false, 00:07:26.441 "ddgst": false 00:07:26.441 }, 00:07:26.441 "method": "bdev_nvme_attach_controller" 00:07:26.441 }' 00:07:26.441 [2024-07-15 07:11:35.260814] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
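gen_nvmf_target_json above expands a heredoc template once per subsystem index (here just 0), runs the result through jq, and hands it to bdevperf as its --json config via process substitution, which is why the trace shows /dev/fd/63 instead of a file. A stripped-down sketch of the same pattern; the top-level wrapper the real helper puts around these bdev_nvme_attach_controller entries is omitted, and the template variables are shown already expanded for this run:

gen_attach_json() {    # illustrative stand-in for gen_nvmf_target_json
    local n
    for n in "$@"; do
        cat <<EOF
{
  "params": {
    "name": "Nvme$n",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$n",
    "hostnqn": "nqn.2016-06.io.spdk:host$n",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    done
}
# bdevperf never sees a file on disk; process substitution exposes the rendered
# config as /dev/fd/NN (63 in the trace above):
#   bdevperf -r /var/tmp/bdevperf.sock --json <(gen_attach_json 0) -q 64 -o 65536 -w verify -t 10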
00:07:26.441 [2024-07-15 07:11:35.260910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64853 ] 00:07:26.699 [2024-07-15 07:11:35.432823] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.699 [2024-07-15 07:11:35.518942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.699 [2024-07-15 07:11:35.564504] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:26.957 Running I/O for 10 seconds... 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.523 07:11:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:27.523 [2024-07-15 07:11:36.384576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.523 [2024-07-15 07:11:36.384629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.523 [2024-07-15 07:11:36.384657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.523 [2024-07-15 07:11:36.384669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.384683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.384694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.384706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.384717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.384729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.384740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.384752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.384762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.384774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.384784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.384797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.384808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.384820] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.384830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.384842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.384853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.384865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.384880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.384893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.384903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.384915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.384925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.384937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.384947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.384959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.384970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.384983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.384998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.385011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.385021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.385034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.385044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.385056] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.385066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.385093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.385104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.385117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 07:11:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.524 [2024-07-15 07:11:36.385127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.385139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.385149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.385161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.385172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.385184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.385194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.385206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.385216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.385228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.385238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.385250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.385261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.385273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.385283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.385295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.385305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.385317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.385328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 07:11:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:27.524 [2024-07-15 07:11:36.385340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.385350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.385362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.385375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.385387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.385398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.385410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.385420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.385432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.385442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.385455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.385464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.385477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.385487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.385499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.385509] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.385521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.524 [2024-07-15 07:11:36.385531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.524 [2024-07-15 07:11:36.385543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.525 [2024-07-15 07:11:36.385553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.385565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.525 [2024-07-15 07:11:36.385575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.385587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.525 [2024-07-15 07:11:36.385597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.385609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.525 [2024-07-15 07:11:36.385619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.385631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.525 [2024-07-15 07:11:36.385641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.385653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.525 [2024-07-15 07:11:36.385663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.385675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.525 [2024-07-15 07:11:36.385685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.385697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.525 [2024-07-15 07:11:36.385708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.385719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.525 [2024-07-15 07:11:36.385731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.385745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.525 [2024-07-15 07:11:36.385755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.385768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.525 [2024-07-15 07:11:36.385778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.385790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.525 [2024-07-15 07:11:36.385800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.385812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.525 [2024-07-15 07:11:36.385822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.385834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.525 [2024-07-15 07:11:36.385844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.385856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.525 [2024-07-15 07:11:36.385866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.385878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.525 [2024-07-15 07:11:36.385888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.385900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.525 [2024-07-15 07:11:36.385910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.385922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.525 [2024-07-15 07:11:36.385932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.385944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.525 [2024-07-15 07:11:36.385954] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.385966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.525 [2024-07-15 07:11:36.385976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.385988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.525 [2024-07-15 07:11:36.385998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.386011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.525 [2024-07-15 07:11:36.386021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.386033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.525 [2024-07-15 07:11:36.386042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.386054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.525 [2024-07-15 07:11:36.386064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.386088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.525 [2024-07-15 07:11:36.386101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.386113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226cec0 is same with the state(5) to be set 00:07:27.525 [2024-07-15 07:11:36.386162] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x226cec0 was disconnected and freed. reset controller. 
00:07:27.525 [2024-07-15 07:11:36.386269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.525 [2024-07-15 07:11:36.386287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.386299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.525 [2024-07-15 07:11:36.386309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.386320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.525 [2024-07-15 07:11:36.386330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.386341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.525 [2024-07-15 07:11:36.386351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.525 [2024-07-15 07:11:36.386360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264d50 is same with the state(5) to be set 00:07:27.525 [2024-07-15 07:11:36.387490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:27.525 task offset: 0 on job bdev=Nvme0n1 fails 00:07:27.525 00:07:27.525 Latency(us) 00:07:27.525 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:27.525 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:27.525 Job: Nvme0n1 ended in about 0.72 seconds with error 00:07:27.525 Verification LBA range: start 0x0 length 0x400 00:07:27.525 Nvme0n1 : 0.72 1428.60 89.29 89.29 0.00 40990.45 2249.08 43134.60 00:07:27.525 =================================================================================================================== 00:07:27.525 Total : 1428.60 89.29 89.29 0.00 40990.45 2249.08 43134.60 00:07:27.525 [2024-07-15 07:11:36.389531] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:27.525 [2024-07-15 07:11:36.389557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2264d50 (9): Bad file descriptor 00:07:27.525 [2024-07-15 07:11:36.398033] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
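The flood of ABORTED - SQ DELETION completions and the controller reset above are the point of this step: once bdevperf has pushed enough I/O through Nvme0n1, the test revokes the host's access to the subsystem and then grants it back, so the active queue pair is torn down mid-workload and the initiator has to reset and reconnect. Condensed from the trace (socket paths as used in this run):

bdevperf_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
target_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
# wait until a reasonable amount of I/O has completed (the trace shows 899 reads against the 100 threshold)
reads=$($bdevperf_rpc bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
(( reads >= 100 )) || exit 1
$target_rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # qpair dropped, in-flight writes aborted
$target_rpc nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # host may reconnect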
00:07:28.458 07:11:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 64853 00:07:28.458 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (64853) - No such process 00:07:28.458 07:11:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:28.458 07:11:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:28.458 07:11:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:28.458 07:11:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:28.458 07:11:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:28.458 07:11:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:28.458 07:11:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:28.458 07:11:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:28.458 { 00:07:28.458 "params": { 00:07:28.458 "name": "Nvme$subsystem", 00:07:28.458 "trtype": "$TEST_TRANSPORT", 00:07:28.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:28.458 "adrfam": "ipv4", 00:07:28.458 "trsvcid": "$NVMF_PORT", 00:07:28.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:28.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:28.459 "hdgst": ${hdgst:-false}, 00:07:28.459 "ddgst": ${ddgst:-false} 00:07:28.459 }, 00:07:28.459 "method": "bdev_nvme_attach_controller" 00:07:28.459 } 00:07:28.459 EOF 00:07:28.459 )") 00:07:28.459 07:11:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:28.459 07:11:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:28.459 07:11:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:28.459 07:11:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:28.459 "params": { 00:07:28.459 "name": "Nvme0", 00:07:28.459 "trtype": "tcp", 00:07:28.459 "traddr": "10.0.0.2", 00:07:28.459 "adrfam": "ipv4", 00:07:28.459 "trsvcid": "4420", 00:07:28.459 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:28.459 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:28.459 "hdgst": false, 00:07:28.459 "ddgst": false 00:07:28.459 }, 00:07:28.459 "method": "bdev_nvme_attach_controller" 00:07:28.459 }' 00:07:28.716 [2024-07-15 07:11:37.440253] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:07:28.716 [2024-07-15 07:11:37.440507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64891 ] 00:07:28.716 [2024-07-15 07:11:37.576016] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.716 [2024-07-15 07:11:37.640720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.974 [2024-07-15 07:11:37.683115] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:28.974 Running I/O for 1 seconds... 
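The second bdevperf pass reuses the same generated config but runs a short, clean job; the flags shown above amount to queue depth 64, 64 KiB I/Os, a data-verifying workload and a one-second runtime. The same invocation, annotated (comments only, no new options):

# -q 64: 64 outstanding I/Os        -o 65536: 64 KiB per I/O
# -w verify: data-verifying workload   -t 1: run for one second
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1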
00:07:29.968 00:07:29.968 Latency(us) 00:07:29.968 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.968 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:29.968 Verification LBA range: start 0x0 length 0x400 00:07:29.968 Nvme0n1 : 1.01 1515.67 94.73 0.00 0.00 41351.30 4200.26 39083.29 00:07:29.968 =================================================================================================================== 00:07:29.968 Total : 1515.67 94.73 0.00 0.00 41351.30 4200.26 39083.29 00:07:30.227 07:11:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:30.227 07:11:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:30.227 07:11:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:30.227 07:11:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:30.227 07:11:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:30.227 07:11:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:30.227 07:11:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:30.227 07:11:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:30.227 07:11:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:30.227 07:11:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:30.227 07:11:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:30.227 rmmod nvme_tcp 00:07:30.227 rmmod nvme_fabrics 00:07:30.227 rmmod nvme_keyring 00:07:30.227 07:11:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:30.227 07:11:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:30.227 07:11:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:30.227 07:11:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 64788 ']' 00:07:30.227 07:11:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 64788 00:07:30.227 07:11:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 64788 ']' 00:07:30.227 07:11:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 64788 00:07:30.227 07:11:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:07:30.227 07:11:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:30.227 07:11:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64788 00:07:30.227 killing process with pid 64788 00:07:30.227 07:11:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:30.227 07:11:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:30.227 07:11:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64788' 00:07:30.227 07:11:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 64788 00:07:30.227 07:11:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 64788 00:07:30.485 [2024-07-15 07:11:39.262144] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:07:30.485 07:11:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:30.485 07:11:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:30.485 07:11:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:30.485 07:11:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:30.485 07:11:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:30.485 07:11:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.485 07:11:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:30.485 07:11:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.485 07:11:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:30.485 07:11:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:30.485 00:07:30.485 real 0m5.791s 00:07:30.485 user 0m22.654s 00:07:30.485 sys 0m1.354s 00:07:30.485 07:11:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.485 ************************************ 00:07:30.485 END TEST nvmf_host_management 00:07:30.485 ************************************ 00:07:30.485 07:11:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:30.485 07:11:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:30.485 07:11:39 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:30.485 07:11:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:30.485 07:11:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.485 07:11:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:30.485 ************************************ 00:07:30.485 START TEST nvmf_lvol 00:07:30.485 ************************************ 00:07:30.485 07:11:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:30.744 * Looking for test storage... 
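Before nvmf_lvol gets going, note that the nvmf_host_management teardown traced just above is the standard nvmftestfini pattern: stop the target, unload the kernel NVMe/TCP initiator modules, and flush the test interface. A condensed sketch (the real helpers in nvmf/common.sh and autotest_common.sh add more error handling; nvmfpid is the target pid recorded at startup, 64788 here):

kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: stop nvmf_tgt (a child of the test shell, so wait works)
modprobe -v -r nvme-tcp              # as shown above, this also rmmod's nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics || true  # no-op if the previous line already removed it
ip -4 addr flush nvmf_init_if        # drop the initiator-side test addresses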
00:07:30.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:30.744 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:30.745 07:11:39 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:30.745 Cannot find device "nvmf_tgt_br" 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:30.745 Cannot find device "nvmf_tgt_br2" 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:30.745 Cannot find device "nvmf_tgt_br" 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:30.745 Cannot find device "nvmf_tgt_br2" 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:30.745 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:30.745 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:30.745 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:31.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:31.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:07:31.004 00:07:31.004 --- 10.0.0.2 ping statistics --- 00:07:31.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.004 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:31.004 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:31.004 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:07:31.004 00:07:31.004 --- 10.0.0.3 ping statistics --- 00:07:31.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.004 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:31.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:31.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:07:31.004 00:07:31.004 --- 10.0.0.1 ping statistics --- 00:07:31.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.004 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:31.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=65102 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 65102 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 65102 ']' 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:31.004 07:11:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:31.004 [2024-07-15 07:11:39.888233] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:07:31.005 [2024-07-15 07:11:39.888323] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.263 [2024-07-15 07:11:40.031174] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:31.263 [2024-07-15 07:11:40.104439] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.263 [2024-07-15 07:11:40.104711] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:31.263 [2024-07-15 07:11:40.104871] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.263 [2024-07-15 07:11:40.105124] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.263 [2024-07-15 07:11:40.105169] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:31.263 [2024-07-15 07:11:40.105405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.263 [2024-07-15 07:11:40.105935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.263 [2024-07-15 07:11:40.105969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.263 [2024-07-15 07:11:40.139731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:31.263 07:11:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:31.263 07:11:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:07:31.263 07:11:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:31.263 07:11:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:31.263 07:11:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:31.521 07:11:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.521 07:11:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:31.779 [2024-07-15 07:11:40.496245] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.779 07:11:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:32.038 07:11:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:32.038 07:11:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:32.297 07:11:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:32.297 07:11:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:32.555 07:11:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:32.813 07:11:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3f50d95b-4854-4d94-b4ce-3ea66e190ed3 00:07:32.813 07:11:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3f50d95b-4854-4d94-b4ce-3ea66e190ed3 lvol 20 00:07:33.073 07:11:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=6180d9d1-ddab-44ac-8795-9a29423b5d50 00:07:33.073 07:11:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:33.332 07:11:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6180d9d1-ddab-44ac-8795-9a29423b5d50 00:07:33.591 07:11:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:33.850 [2024-07-15 07:11:42.667434] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:33.850 07:11:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:34.108 07:11:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65170 00:07:34.109 07:11:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:34.109 07:11:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:35.043 07:11:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 6180d9d1-ddab-44ac-8795-9a29423b5d50 MY_SNAPSHOT 00:07:35.610 07:11:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6ab9e951-65c3-418b-97ac-ba9076a74053 00:07:35.610 07:11:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 6180d9d1-ddab-44ac-8795-9a29423b5d50 30 00:07:35.870 07:11:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 6ab9e951-65c3-418b-97ac-ba9076a74053 MY_CLONE 00:07:36.127 07:11:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=805f0034-2470-405b-a889-f7e08f542a11 00:07:36.127 07:11:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 805f0034-2470-405b-a889-f7e08f542a11 00:07:36.693 07:11:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65170 00:07:44.792 Initializing NVMe Controllers 00:07:44.792 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:44.792 Controller IO queue size 128, less than required. 00:07:44.792 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:44.792 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:44.792 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:44.792 Initialization complete. Launching workers. 
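Condensed, the nvmf_lvol sequence driven above is: export an lvol (built on a RAID-0 of two malloc bdevs) over NVMe/TCP, run spdk_nvme_perf against it, and exercise snapshot/resize/clone/inflate while that I/O is in flight. A sketch of the RPC calls taken from the log (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; the <...> placeholders are the UUIDs printed by the corresponding create calls):

    rpc.py bdev_malloc_create 64 512                                   # Malloc0
    rpc.py bdev_malloc_create 64 512                                   # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # RAID-0 base bdev
    rpc.py bdev_lvol_create_lvstore raid0 lvs                          # prints the lvstore UUID
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20                      # prints the lvol UUID
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # spdk_nvme_perf ... -w randwrite -t 10 ... runs in the background, then:
    rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT                  # prints the snapshot UUID
    rpc.py bdev_lvol_resize <lvol-uuid> 30                             # grow the live lvol
    rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE                    # prints the clone UUID
    rpc.py bdev_lvol_inflate <clone-uuid>                              # decouple the clone from its snapshot

The latency statistics that follow are the result of that 10-second randwrite run.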
00:07:44.792 ======================================================== 00:07:44.792 Latency(us) 00:07:44.792 Device Information : IOPS MiB/s Average min max 00:07:44.792 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10414.90 40.68 12292.05 2178.58 66625.18 00:07:44.792 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10456.00 40.84 12250.00 577.07 79773.18 00:07:44.792 ======================================================== 00:07:44.792 Total : 20870.90 81.53 12270.98 577.07 79773.18 00:07:44.792 00:07:44.792 07:11:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:44.792 07:11:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6180d9d1-ddab-44ac-8795-9a29423b5d50 00:07:45.049 07:11:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3f50d95b-4854-4d94-b4ce-3ea66e190ed3 00:07:45.306 07:11:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:45.306 07:11:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:45.306 07:11:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:45.306 07:11:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:45.306 07:11:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:07:45.306 07:11:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:45.306 07:11:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:07:45.306 07:11:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:45.306 07:11:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:45.306 rmmod nvme_tcp 00:07:45.306 rmmod nvme_fabrics 00:07:45.306 rmmod nvme_keyring 00:07:45.306 07:11:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:45.306 07:11:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:07:45.306 07:11:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:07:45.306 07:11:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 65102 ']' 00:07:45.306 07:11:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 65102 00:07:45.306 07:11:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 65102 ']' 00:07:45.306 07:11:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 65102 00:07:45.306 07:11:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:07:45.306 07:11:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:45.306 07:11:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65102 00:07:45.306 07:11:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:45.563 07:11:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:45.563 07:11:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65102' 00:07:45.563 killing process with pid 65102 00:07:45.563 07:11:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 65102 00:07:45.563 07:11:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 65102 00:07:45.563 07:11:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:45.563 07:11:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:45.563 
07:11:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:45.563 07:11:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:45.563 07:11:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:45.563 07:11:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.563 07:11:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.563 07:11:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.563 07:11:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:45.563 ************************************ 00:07:45.563 END TEST nvmf_lvol 00:07:45.563 ************************************ 00:07:45.563 00:07:45.563 real 0m15.102s 00:07:45.563 user 1m3.789s 00:07:45.563 sys 0m4.123s 00:07:45.563 07:11:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.563 07:11:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:45.822 07:11:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:45.822 07:11:54 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:45.822 07:11:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:45.822 07:11:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.822 07:11:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:45.822 ************************************ 00:07:45.822 START TEST nvmf_lvs_grow 00:07:45.822 ************************************ 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:45.822 * Looking for test storage... 
00:07:45.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:45.822 Cannot find device "nvmf_tgt_br" 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:45.822 Cannot find device "nvmf_tgt_br2" 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:45.822 Cannot find device "nvmf_tgt_br" 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:07:45.822 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:45.822 Cannot find device "nvmf_tgt_br2" 00:07:45.823 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:07:45.823 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:45.823 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:45.823 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:45.823 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:07:45.823 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:45.823 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:45.823 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:45.823 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:45.823 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:45.823 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:46.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:46.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:07:46.081 00:07:46.081 --- 10.0.0.2 ping statistics --- 00:07:46.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.081 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:46.081 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:46.081 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:07:46.081 00:07:46.081 --- 10.0.0.3 ping statistics --- 00:07:46.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.081 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:46.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:46.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:07:46.081 00:07:46.081 --- 10.0.0.1 ping statistics --- 00:07:46.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.081 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:46.081 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:46.082 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:46.082 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:46.082 07:11:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:46.082 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:46.082 07:11:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:46.082 07:11:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:46.082 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=65500 00:07:46.082 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 65500 00:07:46.082 07:11:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:46.082 07:11:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 65500 ']' 00:07:46.082 07:11:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.082 07:11:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:46.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.082 07:11:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
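As in the nvmf_lvol run, nvmfappstart launches the target inside the namespace and waitforlisten blocks until the JSON-RPC socket answers. A rough hand-driven equivalent (the polling loop is an illustrative stand-in for waitforlisten, not the harness code):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # poll the RPC socket until the app is ready to accept rpc.py calls
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

Note the core mask: unlike the 3-core nvmf_lvol target (-m 0x7 above), this lvs_grow target runs on a single reactor (-m 0x1).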
00:07:46.082 07:11:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:46.082 07:11:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:46.082 [2024-07-15 07:11:55.031322] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:07:46.082 [2024-07-15 07:11:55.031422] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.339 [2024-07-15 07:11:55.168194] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.339 [2024-07-15 07:11:55.229319] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:46.339 [2024-07-15 07:11:55.229375] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:46.339 [2024-07-15 07:11:55.229387] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:46.339 [2024-07-15 07:11:55.229395] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:46.340 [2024-07-15 07:11:55.229402] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:46.340 [2024-07-15 07:11:55.229428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.340 [2024-07-15 07:11:55.259160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:46.598 07:11:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:46.598 07:11:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:07:46.598 07:11:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:46.598 07:11:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:46.598 07:11:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:46.598 07:11:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:46.598 07:11:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:46.858 [2024-07-15 07:11:55.633180] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.858 07:11:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:46.858 07:11:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:46.858 07:11:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.858 07:11:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:46.858 ************************************ 00:07:46.858 START TEST lvs_grow_clean 00:07:46.858 ************************************ 00:07:46.858 07:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:07:46.858 07:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:46.858 07:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:46.858 07:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:46.858 07:11:55 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:46.858 07:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:46.858 07:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:46.858 07:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:46.858 07:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:46.858 07:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:47.116 07:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:47.116 07:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:47.382 07:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1fcd63a9-a159-4c5b-90a4-4338bf94bed7 00:07:47.382 07:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:47.382 07:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1fcd63a9-a159-4c5b-90a4-4338bf94bed7 00:07:47.951 07:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:47.951 07:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:47.951 07:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1fcd63a9-a159-4c5b-90a4-4338bf94bed7 lvol 150 00:07:48.209 07:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=046f1243-0c00-447a-8a84-7b001d43083c 00:07:48.209 07:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:48.209 07:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:48.468 [2024-07-15 07:11:57.353976] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:48.468 [2024-07-15 07:11:57.354062] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:48.468 true 00:07:48.468 07:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1fcd63a9-a159-4c5b-90a4-4338bf94bed7 00:07:48.468 07:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:48.726 07:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:48.726 07:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:48.984 07:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 046f1243-0c00-447a-8a84-7b001d43083c 00:07:49.242 07:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:49.499 [2024-07-15 07:11:58.438557] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:49.760 07:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:50.019 07:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65585 00:07:50.019 07:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:50.019 07:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:50.019 07:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65585 /var/tmp/bdevperf.sock 00:07:50.019 07:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 65585 ']' 00:07:50.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:50.019 07:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:50.019 07:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:50.019 07:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:50.019 07:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:50.020 07:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:50.020 [2024-07-15 07:11:58.846372] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
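The point of lvs_grow_clean, visible in the RPCs above and below, is that an lvstore can be grown in place after its base bdev gets bigger: the AIO bdev sits on a 200 MiB file and reports 49 data clusters at 4 MiB per cluster, and once the file is truncated to 400 MiB, rescanned, and the store grown, the same lvstore reports 99. Condensed from the log (rpc.py as before; <lvs-uuid> is the UUID printed by bdev_lvol_create_lvstore):

    truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
    rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # 49
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150                                   # lvol exported over NVMe/TCP
    truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev          # grow the backing file
    rpc.py bdev_aio_rescan aio_bdev                                                  # AIO bdev picks up the new size
    rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>                                      # issued later, under bdevperf I/O
    rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # now 99

The grow itself is issued further down in the log, while bdevperf is writing to the exported lvol.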
00:07:50.020 [2024-07-15 07:11:58.846485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65585 ] 00:07:50.280 [2024-07-15 07:11:58.989407] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.280 [2024-07-15 07:11:59.062086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.280 [2024-07-15 07:11:59.097047] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:50.280 07:11:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:50.280 07:11:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:07:50.280 07:11:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:50.850 Nvme0n1 00:07:50.850 07:11:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:50.850 [ 00:07:50.850 { 00:07:50.850 "name": "Nvme0n1", 00:07:50.850 "aliases": [ 00:07:50.850 "046f1243-0c00-447a-8a84-7b001d43083c" 00:07:50.850 ], 00:07:50.850 "product_name": "NVMe disk", 00:07:50.850 "block_size": 4096, 00:07:50.850 "num_blocks": 38912, 00:07:50.850 "uuid": "046f1243-0c00-447a-8a84-7b001d43083c", 00:07:50.850 "assigned_rate_limits": { 00:07:50.850 "rw_ios_per_sec": 0, 00:07:50.850 "rw_mbytes_per_sec": 0, 00:07:50.850 "r_mbytes_per_sec": 0, 00:07:50.850 "w_mbytes_per_sec": 0 00:07:50.850 }, 00:07:50.850 "claimed": false, 00:07:50.850 "zoned": false, 00:07:50.850 "supported_io_types": { 00:07:50.850 "read": true, 00:07:50.850 "write": true, 00:07:50.850 "unmap": true, 00:07:50.850 "flush": true, 00:07:50.850 "reset": true, 00:07:50.850 "nvme_admin": true, 00:07:50.850 "nvme_io": true, 00:07:50.850 "nvme_io_md": false, 00:07:50.850 "write_zeroes": true, 00:07:50.850 "zcopy": false, 00:07:50.850 "get_zone_info": false, 00:07:50.850 "zone_management": false, 00:07:50.850 "zone_append": false, 00:07:50.850 "compare": true, 00:07:50.850 "compare_and_write": true, 00:07:50.850 "abort": true, 00:07:50.850 "seek_hole": false, 00:07:50.850 "seek_data": false, 00:07:50.850 "copy": true, 00:07:50.850 "nvme_iov_md": false 00:07:50.850 }, 00:07:50.850 "memory_domains": [ 00:07:50.850 { 00:07:50.850 "dma_device_id": "system", 00:07:50.850 "dma_device_type": 1 00:07:50.850 } 00:07:50.850 ], 00:07:50.850 "driver_specific": { 00:07:50.850 "nvme": [ 00:07:50.850 { 00:07:50.850 "trid": { 00:07:50.850 "trtype": "TCP", 00:07:50.850 "adrfam": "IPv4", 00:07:50.850 "traddr": "10.0.0.2", 00:07:50.850 "trsvcid": "4420", 00:07:50.850 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:50.850 }, 00:07:50.850 "ctrlr_data": { 00:07:50.850 "cntlid": 1, 00:07:50.850 "vendor_id": "0x8086", 00:07:50.850 "model_number": "SPDK bdev Controller", 00:07:50.850 "serial_number": "SPDK0", 00:07:50.850 "firmware_revision": "24.09", 00:07:50.850 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:50.850 "oacs": { 00:07:50.850 "security": 0, 00:07:50.850 "format": 0, 00:07:50.850 "firmware": 0, 00:07:50.850 "ns_manage": 0 00:07:50.850 }, 00:07:50.850 "multi_ctrlr": true, 00:07:50.850 
"ana_reporting": false 00:07:50.850 }, 00:07:50.850 "vs": { 00:07:50.850 "nvme_version": "1.3" 00:07:50.850 }, 00:07:50.850 "ns_data": { 00:07:50.850 "id": 1, 00:07:50.850 "can_share": true 00:07:50.850 } 00:07:50.850 } 00:07:50.850 ], 00:07:50.850 "mp_policy": "active_passive" 00:07:50.850 } 00:07:50.850 } 00:07:50.850 ] 00:07:50.850 07:11:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65597 00:07:50.850 07:11:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:50.850 07:11:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:51.108 Running I/O for 10 seconds... 00:07:52.040 Latency(us) 00:07:52.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.040 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.040 Nvme0n1 : 1.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:07:52.040 =================================================================================================================== 00:07:52.040 Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:07:52.040 00:07:52.972 07:12:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1fcd63a9-a159-4c5b-90a4-4338bf94bed7 00:07:52.972 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.972 Nvme0n1 : 2.00 7429.50 29.02 0.00 0.00 0.00 0.00 0.00 00:07:52.972 =================================================================================================================== 00:07:52.972 Total : 7429.50 29.02 0.00 0.00 0.00 0.00 0.00 00:07:52.972 00:07:53.230 true 00:07:53.230 07:12:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1fcd63a9-a159-4c5b-90a4-4338bf94bed7 00:07:53.230 07:12:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:53.488 07:12:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:53.488 07:12:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:53.488 07:12:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65597 00:07:54.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.066 Nvme0n1 : 3.00 7450.67 29.10 0.00 0.00 0.00 0.00 0.00 00:07:54.066 =================================================================================================================== 00:07:54.066 Total : 7450.67 29.10 0.00 0.00 0.00 0.00 0.00 00:07:54.066 00:07:55.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.023 Nvme0n1 : 4.00 7461.25 29.15 0.00 0.00 0.00 0.00 0.00 00:07:55.023 =================================================================================================================== 00:07:55.023 Total : 7461.25 29.15 0.00 0.00 0.00 0.00 0.00 00:07:55.023 00:07:55.957 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.957 Nvme0n1 : 5.00 7442.20 29.07 0.00 0.00 0.00 0.00 0.00 00:07:55.957 =================================================================================================================== 00:07:55.957 Total : 7442.20 29.07 0.00 0.00 0.00 
0.00 0.00 00:07:55.957 00:07:57.326 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.326 Nvme0n1 : 6.00 7387.17 28.86 0.00 0.00 0.00 0.00 0.00 00:07:57.326 =================================================================================================================== 00:07:57.326 Total : 7387.17 28.86 0.00 0.00 0.00 0.00 0.00 00:07:57.326 00:07:58.262 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.262 Nvme0n1 : 7.00 7347.86 28.70 0.00 0.00 0.00 0.00 0.00 00:07:58.262 =================================================================================================================== 00:07:58.262 Total : 7347.86 28.70 0.00 0.00 0.00 0.00 0.00 00:07:58.262 00:07:59.194 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.194 Nvme0n1 : 8.00 7310.50 28.56 0.00 0.00 0.00 0.00 0.00 00:07:59.194 =================================================================================================================== 00:07:59.194 Total : 7310.50 28.56 0.00 0.00 0.00 0.00 0.00 00:07:59.194 00:08:00.127 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.127 Nvme0n1 : 9.00 7246.11 28.31 0.00 0.00 0.00 0.00 0.00 00:08:00.127 =================================================================================================================== 00:08:00.127 Total : 7246.11 28.31 0.00 0.00 0.00 0.00 0.00 00:08:00.127 00:08:01.063 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.063 Nvme0n1 : 10.00 7232.70 28.25 0.00 0.00 0.00 0.00 0.00 00:08:01.063 =================================================================================================================== 00:08:01.063 Total : 7232.70 28.25 0.00 0.00 0.00 0.00 0.00 00:08:01.063 00:08:01.063 00:08:01.063 Latency(us) 00:08:01.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.063 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.063 Nvme0n1 : 10.01 7238.68 28.28 0.00 0.00 17677.90 9234.62 51237.24 00:08:01.063 =================================================================================================================== 00:08:01.063 Total : 7238.68 28.28 0.00 0.00 17677.90 9234.62 51237.24 00:08:01.063 0 00:08:01.063 07:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65585 00:08:01.063 07:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 65585 ']' 00:08:01.063 07:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 65585 00:08:01.063 07:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:08:01.063 07:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:01.063 07:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65585 00:08:01.063 killing process with pid 65585 00:08:01.063 Received shutdown signal, test time was about 10.000000 seconds 00:08:01.063 00:08:01.063 Latency(us) 00:08:01.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.063 =================================================================================================================== 00:08:01.063 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:01.063 07:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:08:01.063 07:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:01.063 07:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65585' 00:08:01.063 07:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 65585 00:08:01.063 07:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 65585 00:08:01.321 07:12:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:01.582 07:12:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:01.840 07:12:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1fcd63a9-a159-4c5b-90a4-4338bf94bed7 00:08:01.840 07:12:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:02.098 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:02.098 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:02.098 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:02.355 [2024-07-15 07:12:11.257736] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:02.355 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1fcd63a9-a159-4c5b-90a4-4338bf94bed7 00:08:02.355 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:08:02.355 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1fcd63a9-a159-4c5b-90a4-4338bf94bed7 00:08:02.355 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:02.355 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.355 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:02.355 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.355 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:02.355 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.355 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:02.355 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:02.355 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1fcd63a9-a159-4c5b-90a4-4338bf94bed7 00:08:02.614 request: 00:08:02.614 { 00:08:02.614 "uuid": "1fcd63a9-a159-4c5b-90a4-4338bf94bed7", 00:08:02.614 "method": "bdev_lvol_get_lvstores", 00:08:02.614 "req_id": 1 00:08:02.614 } 00:08:02.614 Got JSON-RPC error response 00:08:02.614 response: 00:08:02.614 { 00:08:02.614 "code": -19, 00:08:02.614 "message": "No such device" 00:08:02.614 } 00:08:02.614 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:08:02.614 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:02.614 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:02.614 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:02.614 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:02.872 aio_bdev 00:08:02.872 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 046f1243-0c00-447a-8a84-7b001d43083c 00:08:02.872 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=046f1243-0c00-447a-8a84-7b001d43083c 00:08:02.872 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:02.872 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:08:02.872 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:02.872 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:02.872 07:12:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:03.438 07:12:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 046f1243-0c00-447a-8a84-7b001d43083c -t 2000 00:08:03.438 [ 00:08:03.438 { 00:08:03.438 "name": "046f1243-0c00-447a-8a84-7b001d43083c", 00:08:03.438 "aliases": [ 00:08:03.438 "lvs/lvol" 00:08:03.438 ], 00:08:03.438 "product_name": "Logical Volume", 00:08:03.438 "block_size": 4096, 00:08:03.438 "num_blocks": 38912, 00:08:03.438 "uuid": "046f1243-0c00-447a-8a84-7b001d43083c", 00:08:03.438 "assigned_rate_limits": { 00:08:03.438 "rw_ios_per_sec": 0, 00:08:03.438 "rw_mbytes_per_sec": 0, 00:08:03.438 "r_mbytes_per_sec": 0, 00:08:03.438 "w_mbytes_per_sec": 0 00:08:03.438 }, 00:08:03.438 "claimed": false, 00:08:03.438 "zoned": false, 00:08:03.438 "supported_io_types": { 00:08:03.438 "read": true, 00:08:03.438 "write": true, 00:08:03.438 "unmap": true, 00:08:03.438 "flush": false, 00:08:03.438 "reset": true, 00:08:03.438 "nvme_admin": false, 00:08:03.438 "nvme_io": false, 00:08:03.438 "nvme_io_md": false, 00:08:03.438 "write_zeroes": true, 00:08:03.438 "zcopy": false, 00:08:03.438 "get_zone_info": false, 00:08:03.438 "zone_management": false, 00:08:03.439 "zone_append": false, 00:08:03.439 "compare": false, 00:08:03.439 "compare_and_write": false, 00:08:03.439 "abort": false, 00:08:03.439 "seek_hole": true, 00:08:03.439 "seek_data": true, 00:08:03.439 "copy": false, 00:08:03.439 "nvme_iov_md": false 00:08:03.439 }, 00:08:03.439 
"driver_specific": { 00:08:03.439 "lvol": { 00:08:03.439 "lvol_store_uuid": "1fcd63a9-a159-4c5b-90a4-4338bf94bed7", 00:08:03.439 "base_bdev": "aio_bdev", 00:08:03.439 "thin_provision": false, 00:08:03.439 "num_allocated_clusters": 38, 00:08:03.439 "snapshot": false, 00:08:03.439 "clone": false, 00:08:03.439 "esnap_clone": false 00:08:03.439 } 00:08:03.439 } 00:08:03.439 } 00:08:03.439 ] 00:08:03.439 07:12:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:08:03.439 07:12:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1fcd63a9-a159-4c5b-90a4-4338bf94bed7 00:08:03.439 07:12:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:04.004 07:12:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:04.004 07:12:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1fcd63a9-a159-4c5b-90a4-4338bf94bed7 00:08:04.004 07:12:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:04.004 07:12:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:04.004 07:12:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 046f1243-0c00-447a-8a84-7b001d43083c 00:08:04.262 07:12:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1fcd63a9-a159-4c5b-90a4-4338bf94bed7 00:08:04.523 07:12:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:05.088 07:12:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:05.346 ************************************ 00:08:05.346 END TEST lvs_grow_clean 00:08:05.346 ************************************ 00:08:05.346 00:08:05.346 real 0m18.487s 00:08:05.346 user 0m17.338s 00:08:05.346 sys 0m2.512s 00:08:05.346 07:12:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.346 07:12:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:05.346 07:12:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:05.346 07:12:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:05.346 07:12:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:05.346 07:12:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.346 07:12:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:05.346 ************************************ 00:08:05.346 START TEST lvs_grow_dirty 00:08:05.346 ************************************ 00:08:05.346 07:12:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:08:05.346 07:12:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:05.346 07:12:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters 
free_clusters 00:08:05.346 07:12:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:05.346 07:12:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:05.346 07:12:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:05.346 07:12:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:05.346 07:12:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:05.346 07:12:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:05.346 07:12:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:05.605 07:12:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:05.605 07:12:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:06.171 07:12:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2023f601-6302-4800-b935-f2efc6a2d488 00:08:06.171 07:12:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2023f601-6302-4800-b935-f2efc6a2d488 00:08:06.171 07:12:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:06.171 07:12:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:06.171 07:12:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:06.171 07:12:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2023f601-6302-4800-b935-f2efc6a2d488 lvol 150 00:08:06.737 07:12:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e0a9f2ca-8412-4394-8731-18c48b94eaad 00:08:06.737 07:12:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:06.737 07:12:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:06.737 [2024-07-15 07:12:15.629053] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:06.737 [2024-07-15 07:12:15.629188] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:06.737 true 00:08:06.737 07:12:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2023f601-6302-4800-b935-f2efc6a2d488 00:08:06.737 07:12:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:07.303 07:12:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:07.303 07:12:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:07.562 07:12:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e0a9f2ca-8412-4394-8731-18c48b94eaad 00:08:07.820 07:12:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:08.078 [2024-07-15 07:12:16.801737] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.078 07:12:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:08.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:08.336 07:12:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65854 00:08:08.336 07:12:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:08.336 07:12:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:08.336 07:12:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65854 /var/tmp/bdevperf.sock 00:08:08.336 07:12:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 65854 ']' 00:08:08.336 07:12:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:08.336 07:12:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:08.336 07:12:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:08.336 07:12:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:08.336 07:12:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:08.336 [2024-07-15 07:12:17.190258] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
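In outline, the subsystem and bdevperf setup traced above reduces to the following sketch (assembled only from commands visible in this log; the lvol UUID and repo paths are the ones from this particular run):

  # publish the lvol bdev over NVMe/TCP, then start bdevperf against it
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  LVOL=e0a9f2ca-8412-4394-8731-18c48b94eaad        # lvol created a few steps earlier
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # bdevperf is launched with -z, so no I/O starts until the perform_tests RPC
  # (issued later in the log via bdevperf.py) kicks off the 10 s randwrite run
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &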
00:08:08.336 [2024-07-15 07:12:17.190650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65854 ] 00:08:08.594 [2024-07-15 07:12:17.344723] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.594 [2024-07-15 07:12:17.423039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.594 [2024-07-15 07:12:17.458373] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:09.529 07:12:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:09.529 07:12:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:09.529 07:12:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:09.801 Nvme0n1 00:08:09.801 07:12:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:10.059 [ 00:08:10.059 { 00:08:10.059 "name": "Nvme0n1", 00:08:10.059 "aliases": [ 00:08:10.059 "e0a9f2ca-8412-4394-8731-18c48b94eaad" 00:08:10.059 ], 00:08:10.059 "product_name": "NVMe disk", 00:08:10.059 "block_size": 4096, 00:08:10.059 "num_blocks": 38912, 00:08:10.059 "uuid": "e0a9f2ca-8412-4394-8731-18c48b94eaad", 00:08:10.059 "assigned_rate_limits": { 00:08:10.059 "rw_ios_per_sec": 0, 00:08:10.059 "rw_mbytes_per_sec": 0, 00:08:10.059 "r_mbytes_per_sec": 0, 00:08:10.059 "w_mbytes_per_sec": 0 00:08:10.059 }, 00:08:10.059 "claimed": false, 00:08:10.059 "zoned": false, 00:08:10.059 "supported_io_types": { 00:08:10.059 "read": true, 00:08:10.059 "write": true, 00:08:10.059 "unmap": true, 00:08:10.059 "flush": true, 00:08:10.059 "reset": true, 00:08:10.059 "nvme_admin": true, 00:08:10.059 "nvme_io": true, 00:08:10.059 "nvme_io_md": false, 00:08:10.059 "write_zeroes": true, 00:08:10.059 "zcopy": false, 00:08:10.059 "get_zone_info": false, 00:08:10.059 "zone_management": false, 00:08:10.059 "zone_append": false, 00:08:10.059 "compare": true, 00:08:10.059 "compare_and_write": true, 00:08:10.059 "abort": true, 00:08:10.059 "seek_hole": false, 00:08:10.059 "seek_data": false, 00:08:10.059 "copy": true, 00:08:10.059 "nvme_iov_md": false 00:08:10.059 }, 00:08:10.059 "memory_domains": [ 00:08:10.059 { 00:08:10.059 "dma_device_id": "system", 00:08:10.059 "dma_device_type": 1 00:08:10.059 } 00:08:10.059 ], 00:08:10.059 "driver_specific": { 00:08:10.059 "nvme": [ 00:08:10.059 { 00:08:10.059 "trid": { 00:08:10.059 "trtype": "TCP", 00:08:10.059 "adrfam": "IPv4", 00:08:10.059 "traddr": "10.0.0.2", 00:08:10.059 "trsvcid": "4420", 00:08:10.059 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:10.059 }, 00:08:10.059 "ctrlr_data": { 00:08:10.059 "cntlid": 1, 00:08:10.059 "vendor_id": "0x8086", 00:08:10.059 "model_number": "SPDK bdev Controller", 00:08:10.059 "serial_number": "SPDK0", 00:08:10.059 "firmware_revision": "24.09", 00:08:10.059 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:10.059 "oacs": { 00:08:10.059 "security": 0, 00:08:10.059 "format": 0, 00:08:10.059 "firmware": 0, 00:08:10.059 "ns_manage": 0 00:08:10.059 }, 00:08:10.059 "multi_ctrlr": true, 00:08:10.059 
"ana_reporting": false 00:08:10.059 }, 00:08:10.059 "vs": { 00:08:10.059 "nvme_version": "1.3" 00:08:10.059 }, 00:08:10.059 "ns_data": { 00:08:10.059 "id": 1, 00:08:10.059 "can_share": true 00:08:10.059 } 00:08:10.059 } 00:08:10.059 ], 00:08:10.059 "mp_policy": "active_passive" 00:08:10.059 } 00:08:10.059 } 00:08:10.059 ] 00:08:10.059 07:12:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65872 00:08:10.059 07:12:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:10.059 07:12:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:10.316 Running I/O for 10 seconds... 00:08:11.247 Latency(us) 00:08:11.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.247 Nvme0n1 : 1.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:11.247 =================================================================================================================== 00:08:11.247 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:11.247 00:08:12.179 07:12:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2023f601-6302-4800-b935-f2efc6a2d488 00:08:12.179 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.179 Nvme0n1 : 2.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:12.179 =================================================================================================================== 00:08:12.179 Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:12.179 00:08:12.437 true 00:08:12.437 07:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2023f601-6302-4800-b935-f2efc6a2d488 00:08:12.437 07:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:12.694 07:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:12.694 07:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:12.694 07:12:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 65872 00:08:13.260 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.260 Nvme0n1 : 3.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:13.260 =================================================================================================================== 00:08:13.260 Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:13.260 00:08:14.193 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.193 Nvme0n1 : 4.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:14.193 =================================================================================================================== 00:08:14.193 Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:14.193 00:08:15.224 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.224 Nvme0n1 : 5.00 7315.20 28.57 0.00 0.00 0.00 0.00 0.00 00:08:15.224 =================================================================================================================== 00:08:15.224 Total : 7315.20 28.57 0.00 0.00 0.00 
0.00 0.00 00:08:15.224 00:08:16.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.158 Nvme0n1 : 6.00 7277.50 28.43 0.00 0.00 0.00 0.00 0.00 00:08:16.158 =================================================================================================================== 00:08:16.158 Total : 7277.50 28.43 0.00 0.00 0.00 0.00 0.00 00:08:16.158 00:08:17.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.093 Nvme0n1 : 7.00 7126.86 27.84 0.00 0.00 0.00 0.00 0.00 00:08:17.093 =================================================================================================================== 00:08:17.093 Total : 7126.86 27.84 0.00 0.00 0.00 0.00 0.00 00:08:17.093 00:08:18.468 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.468 Nvme0n1 : 8.00 7093.25 27.71 0.00 0.00 0.00 0.00 0.00 00:08:18.468 =================================================================================================================== 00:08:18.468 Total : 7093.25 27.71 0.00 0.00 0.00 0.00 0.00 00:08:18.468 00:08:19.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.403 Nvme0n1 : 9.00 7081.22 27.66 0.00 0.00 0.00 0.00 0.00 00:08:19.403 =================================================================================================================== 00:08:19.403 Total : 7081.22 27.66 0.00 0.00 0.00 0.00 0.00 00:08:19.403 00:08:20.336 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.336 Nvme0n1 : 10.00 7058.90 27.57 0.00 0.00 0.00 0.00 0.00 00:08:20.336 =================================================================================================================== 00:08:20.336 Total : 7058.90 27.57 0.00 0.00 0.00 0.00 0.00 00:08:20.336 00:08:20.336 00:08:20.336 Latency(us) 00:08:20.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.336 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.336 Nvme0n1 : 10.00 7068.26 27.61 0.00 0.00 18102.66 13464.67 153473.40 00:08:20.336 =================================================================================================================== 00:08:20.336 Total : 7068.26 27.61 0.00 0.00 18102.66 13464.67 153473.40 00:08:20.336 0 00:08:20.336 07:12:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65854 00:08:20.336 07:12:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 65854 ']' 00:08:20.336 07:12:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 65854 00:08:20.336 07:12:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:08:20.336 07:12:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:20.336 07:12:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65854 00:08:20.336 killing process with pid 65854 00:08:20.336 Received shutdown signal, test time was about 10.000000 seconds 00:08:20.336 00:08:20.336 Latency(us) 00:08:20.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.336 =================================================================================================================== 00:08:20.336 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:20.336 07:12:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:08:20.336 07:12:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:20.336 07:12:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65854' 00:08:20.336 07:12:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 65854 00:08:20.336 07:12:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 65854 00:08:20.336 07:12:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:20.902 07:12:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:20.903 07:12:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2023f601-6302-4800-b935-f2efc6a2d488 00:08:20.903 07:12:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:21.470 07:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:21.470 07:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:21.470 07:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65500 00:08:21.470 07:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65500 00:08:21.470 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65500 Killed "${NVMF_APP[@]}" "$@" 00:08:21.470 07:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:21.470 07:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:21.470 07:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:21.470 07:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:21.470 07:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:21.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.470 07:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=66010 00:08:21.470 07:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 66010 00:08:21.470 07:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66010 ']' 00:08:21.470 07:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.470 07:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.470 07:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:21.470 07:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
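The dirty-path teardown just traced boils down to this sketch (commands copied from the trace; the lvstore UUID and pid 65500 are specific to this run):

  # detach the fabric side, record the cluster counts, then SIGKILL the target so
  # the lvstore is left dirty for the recovery phase that follows
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  LVS=2023f601-6302-4800-b935-f2efc6a2d488
  $RPC nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  free_clusters=$($RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].free_clusters')
  kill -9 65500     # dirty branch only: skip the clean lvstore shutdown on purpose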
00:08:21.470 07:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.470 07:12:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:21.470 [2024-07-15 07:12:30.241303] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:08:21.470 [2024-07-15 07:12:30.241399] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.470 [2024-07-15 07:12:30.386663] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.728 [2024-07-15 07:12:30.460898] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.728 [2024-07-15 07:12:30.460961] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.728 [2024-07-15 07:12:30.460976] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.728 [2024-07-15 07:12:30.460986] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.728 [2024-07-15 07:12:30.460996] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.728 [2024-07-15 07:12:30.461025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.728 [2024-07-15 07:12:30.495658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:22.663 07:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:22.663 07:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:22.663 07:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:22.663 07:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:22.663 07:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:22.663 07:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.663 07:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:22.663 [2024-07-15 07:12:31.568360] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:22.663 [2024-07-15 07:12:31.568664] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:22.663 [2024-07-15 07:12:31.568802] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:22.921 07:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:22.921 07:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e0a9f2ca-8412-4394-8731-18c48b94eaad 00:08:22.921 07:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=e0a9f2ca-8412-4394-8731-18c48b94eaad 00:08:22.921 07:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:22.921 07:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 
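The recovery step that has just started is, in outline, nothing more than re-registering the same backing file and letting examine rebuild the lvstore (a sketch using the path and UUID from this run):

  # re-create the AIO bdev on the same 400M file; examine runs blobstore recovery
  # ("Performing recovery on blobstore") and re-exposes lvs/lvol automatically
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  AIO_FILE=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  $RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096
  $RPC bdev_wait_for_examine
  # waitforbdev: the recovered lvol must reappear within its 2000 ms timeout
  $RPC bdev_get_bdevs -b e0a9f2ca-8412-4394-8731-18c48b94eaad -t 2000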
00:08:22.921 07:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:22.921 07:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:22.921 07:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:23.180 07:12:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e0a9f2ca-8412-4394-8731-18c48b94eaad -t 2000 00:08:23.180 [ 00:08:23.180 { 00:08:23.180 "name": "e0a9f2ca-8412-4394-8731-18c48b94eaad", 00:08:23.180 "aliases": [ 00:08:23.180 "lvs/lvol" 00:08:23.180 ], 00:08:23.180 "product_name": "Logical Volume", 00:08:23.180 "block_size": 4096, 00:08:23.180 "num_blocks": 38912, 00:08:23.180 "uuid": "e0a9f2ca-8412-4394-8731-18c48b94eaad", 00:08:23.180 "assigned_rate_limits": { 00:08:23.180 "rw_ios_per_sec": 0, 00:08:23.180 "rw_mbytes_per_sec": 0, 00:08:23.180 "r_mbytes_per_sec": 0, 00:08:23.180 "w_mbytes_per_sec": 0 00:08:23.180 }, 00:08:23.180 "claimed": false, 00:08:23.180 "zoned": false, 00:08:23.180 "supported_io_types": { 00:08:23.180 "read": true, 00:08:23.180 "write": true, 00:08:23.180 "unmap": true, 00:08:23.180 "flush": false, 00:08:23.180 "reset": true, 00:08:23.180 "nvme_admin": false, 00:08:23.180 "nvme_io": false, 00:08:23.180 "nvme_io_md": false, 00:08:23.180 "write_zeroes": true, 00:08:23.180 "zcopy": false, 00:08:23.180 "get_zone_info": false, 00:08:23.180 "zone_management": false, 00:08:23.180 "zone_append": false, 00:08:23.180 "compare": false, 00:08:23.180 "compare_and_write": false, 00:08:23.180 "abort": false, 00:08:23.180 "seek_hole": true, 00:08:23.180 "seek_data": true, 00:08:23.180 "copy": false, 00:08:23.180 "nvme_iov_md": false 00:08:23.180 }, 00:08:23.180 "driver_specific": { 00:08:23.180 "lvol": { 00:08:23.180 "lvol_store_uuid": "2023f601-6302-4800-b935-f2efc6a2d488", 00:08:23.180 "base_bdev": "aio_bdev", 00:08:23.180 "thin_provision": false, 00:08:23.180 "num_allocated_clusters": 38, 00:08:23.180 "snapshot": false, 00:08:23.180 "clone": false, 00:08:23.180 "esnap_clone": false 00:08:23.180 } 00:08:23.180 } 00:08:23.180 } 00:08:23.180 ] 00:08:23.437 07:12:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:23.437 07:12:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2023f601-6302-4800-b935-f2efc6a2d488 00:08:23.437 07:12:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:23.695 07:12:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:23.695 07:12:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2023f601-6302-4800-b935-f2efc6a2d488 00:08:23.695 07:12:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:23.954 07:12:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:23.954 07:12:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:24.212 [2024-07-15 07:12:32.974215] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev 
aio_bdev being removed: closing lvstore lvs 00:08:24.212 07:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2023f601-6302-4800-b935-f2efc6a2d488 00:08:24.212 07:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:08:24.212 07:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2023f601-6302-4800-b935-f2efc6a2d488 00:08:24.212 07:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:24.212 07:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:24.212 07:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:24.212 07:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:24.212 07:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:24.212 07:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:24.212 07:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:24.212 07:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:24.212 07:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2023f601-6302-4800-b935-f2efc6a2d488 00:08:24.472 request: 00:08:24.472 { 00:08:24.472 "uuid": "2023f601-6302-4800-b935-f2efc6a2d488", 00:08:24.472 "method": "bdev_lvol_get_lvstores", 00:08:24.472 "req_id": 1 00:08:24.472 } 00:08:24.472 Got JSON-RPC error response 00:08:24.472 response: 00:08:24.472 { 00:08:24.472 "code": -19, 00:08:24.472 "message": "No such device" 00:08:24.472 } 00:08:24.472 07:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:08:24.472 07:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:24.472 07:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:24.472 07:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:24.472 07:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:24.730 aio_bdev 00:08:24.730 07:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e0a9f2ca-8412-4394-8731-18c48b94eaad 00:08:24.730 07:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=e0a9f2ca-8412-4394-8731-18c48b94eaad 00:08:24.730 07:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:24.730 07:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:08:24.730 07:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:24.730 07:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:24.730 07:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:24.988 07:12:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e0a9f2ca-8412-4394-8731-18c48b94eaad -t 2000 00:08:25.246 [ 00:08:25.246 { 00:08:25.246 "name": "e0a9f2ca-8412-4394-8731-18c48b94eaad", 00:08:25.246 "aliases": [ 00:08:25.246 "lvs/lvol" 00:08:25.246 ], 00:08:25.246 "product_name": "Logical Volume", 00:08:25.246 "block_size": 4096, 00:08:25.246 "num_blocks": 38912, 00:08:25.246 "uuid": "e0a9f2ca-8412-4394-8731-18c48b94eaad", 00:08:25.246 "assigned_rate_limits": { 00:08:25.246 "rw_ios_per_sec": 0, 00:08:25.246 "rw_mbytes_per_sec": 0, 00:08:25.246 "r_mbytes_per_sec": 0, 00:08:25.246 "w_mbytes_per_sec": 0 00:08:25.246 }, 00:08:25.246 "claimed": false, 00:08:25.246 "zoned": false, 00:08:25.246 "supported_io_types": { 00:08:25.246 "read": true, 00:08:25.246 "write": true, 00:08:25.246 "unmap": true, 00:08:25.246 "flush": false, 00:08:25.246 "reset": true, 00:08:25.246 "nvme_admin": false, 00:08:25.246 "nvme_io": false, 00:08:25.246 "nvme_io_md": false, 00:08:25.247 "write_zeroes": true, 00:08:25.247 "zcopy": false, 00:08:25.247 "get_zone_info": false, 00:08:25.247 "zone_management": false, 00:08:25.247 "zone_append": false, 00:08:25.247 "compare": false, 00:08:25.247 "compare_and_write": false, 00:08:25.247 "abort": false, 00:08:25.247 "seek_hole": true, 00:08:25.247 "seek_data": true, 00:08:25.247 "copy": false, 00:08:25.247 "nvme_iov_md": false 00:08:25.247 }, 00:08:25.247 "driver_specific": { 00:08:25.247 "lvol": { 00:08:25.247 "lvol_store_uuid": "2023f601-6302-4800-b935-f2efc6a2d488", 00:08:25.247 "base_bdev": "aio_bdev", 00:08:25.247 "thin_provision": false, 00:08:25.247 "num_allocated_clusters": 38, 00:08:25.247 "snapshot": false, 00:08:25.247 "clone": false, 00:08:25.247 "esnap_clone": false 00:08:25.247 } 00:08:25.247 } 00:08:25.247 } 00:08:25.247 ] 00:08:25.247 07:12:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:25.247 07:12:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2023f601-6302-4800-b935-f2efc6a2d488 00:08:25.247 07:12:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:25.505 07:12:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:25.505 07:12:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2023f601-6302-4800-b935-f2efc6a2d488 00:08:25.505 07:12:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:26.083 07:12:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:26.083 07:12:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e0a9f2ca-8412-4394-8731-18c48b94eaad 00:08:26.345 07:12:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 2023f601-6302-4800-b935-f2efc6a2d488 00:08:26.602 07:12:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:26.859 07:12:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:27.425 ************************************ 00:08:27.425 END TEST lvs_grow_dirty 00:08:27.425 ************************************ 00:08:27.425 00:08:27.425 real 0m21.870s 00:08:27.425 user 0m44.961s 00:08:27.425 sys 0m8.163s 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:27.425 nvmf_trace.0 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:27.425 rmmod nvme_tcp 00:08:27.425 rmmod nvme_fabrics 00:08:27.425 rmmod nvme_keyring 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 66010 ']' 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 66010 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 66010 ']' 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 66010 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- 
# '[' Linux = Linux ']' 00:08:27.425 07:12:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66010 00:08:27.682 killing process with pid 66010 00:08:27.682 07:12:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:27.682 07:12:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:27.682 07:12:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66010' 00:08:27.682 07:12:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 66010 00:08:27.682 07:12:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 66010 00:08:27.682 07:12:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:27.682 07:12:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:27.682 07:12:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:27.682 07:12:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:27.682 07:12:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:27.682 07:12:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.682 07:12:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.682 07:12:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.682 07:12:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:27.682 ************************************ 00:08:27.682 END TEST nvmf_lvs_grow 00:08:27.682 ************************************ 00:08:27.682 00:08:27.682 real 0m42.059s 00:08:27.682 user 1m9.142s 00:08:27.682 sys 0m11.269s 00:08:27.682 07:12:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:27.682 07:12:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:27.682 07:12:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:27.682 07:12:36 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:27.682 07:12:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:27.682 07:12:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.682 07:12:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:27.939 ************************************ 00:08:27.939 START TEST nvmf_bdev_io_wait 00:08:27.939 ************************************ 00:08:27.939 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:27.939 * Looking for test storage... 
00:08:27.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:27.939 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:27.939 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:27.939 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.939 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.939 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.939 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.939 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.939 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.939 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.939 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.939 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.939 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.939 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:08:27.939 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:08:27.939 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.939 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.939 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:27.939 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.939 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:27.939 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.939 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:27.940 Cannot find device "nvmf_tgt_br" 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:27.940 Cannot find device "nvmf_tgt_br2" 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:27.940 Cannot find device "nvmf_tgt_br" 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:27.940 Cannot find device "nvmf_tgt_br2" 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
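The "Cannot find device" and "Cannot open network namespace" lines around here are expected: nvmftestinit first removes whatever topology a previous test left behind, tolerating every failure, before rebuilding it. Roughly (a sketch of the pattern using the interface names from this log, not the exact common.sh code):

  # best-effort cleanup of stale test networking; each step may legitimately fail
  for ifc in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$ifc" nomaster || true
      ip link set "$ifc" down || true
  done
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true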
00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:27.940 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:27.940 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:27.940 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:28.219 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:28.219 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:28.219 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:28.219 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:28.219 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:28.219 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:28.219 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:28.219 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:28.219 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:28.219 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:28.219 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:28.219 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:28.219 07:12:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:28.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:08:28.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:08:28.219 00:08:28.219 --- 10.0.0.2 ping statistics --- 00:08:28.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.219 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:28.219 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:28.219 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:08:28.219 00:08:28.219 --- 10.0.0.3 ping statistics --- 00:08:28.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.219 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:28.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:28.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:08:28.219 00:08:28.219 --- 10.0.0.1 ping statistics --- 00:08:28.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.219 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=66326 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 66326 00:08:28.219 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:28.220 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 66326 ']' 00:08:28.220 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.220 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:28.220 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
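Once the ping checks confirm the veth topology, the target for this test is started inside the namespace in --wait-for-rpc mode and the harness polls its RPC socket before configuring anything. One way to approximate what waitforlisten is doing (the polling loop is illustrative, not the actual common.sh implementation):

  # launch nvmf_tgt inside the test netns, paused until RPC configuration arrives
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!
  # poll the RPC socket until the app answers
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done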
00:08:28.220 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:28.220 07:12:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.220 [2024-07-15 07:12:37.159259] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:08:28.220 [2024-07-15 07:12:37.159350] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.478 [2024-07-15 07:12:37.298133] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:28.478 [2024-07-15 07:12:37.375627] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.478 [2024-07-15 07:12:37.375919] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.478 [2024-07-15 07:12:37.376185] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.478 [2024-07-15 07:12:37.376353] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.478 [2024-07-15 07:12:37.376567] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:28.478 [2024-07-15 07:12:37.376765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.478 [2024-07-15 07:12:37.376918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.478 [2024-07-15 07:12:37.377686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.478 [2024-07-15 07:12:37.377743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.411 [2024-07-15 07:12:38.273556] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:29.411 
07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.411 [2024-07-15 07:12:38.288332] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.411 Malloc0 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.411 [2024-07-15 07:12:38.349413] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66361 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66363 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:29.411 { 00:08:29.411 "params": { 00:08:29.411 "name": "Nvme$subsystem", 00:08:29.411 "trtype": "$TEST_TRANSPORT", 00:08:29.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:29.411 "adrfam": "ipv4", 00:08:29.411 "trsvcid": "$NVMF_PORT", 00:08:29.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:29.411 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:08:29.411 "hdgst": ${hdgst:-false}, 00:08:29.411 "ddgst": ${ddgst:-false} 00:08:29.411 }, 00:08:29.411 "method": "bdev_nvme_attach_controller" 00:08:29.411 } 00:08:29.411 EOF 00:08:29.411 )") 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:29.411 { 00:08:29.411 "params": { 00:08:29.411 "name": "Nvme$subsystem", 00:08:29.411 "trtype": "$TEST_TRANSPORT", 00:08:29.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:29.411 "adrfam": "ipv4", 00:08:29.411 "trsvcid": "$NVMF_PORT", 00:08:29.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:29.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:29.411 "hdgst": ${hdgst:-false}, 00:08:29.411 "ddgst": ${ddgst:-false} 00:08:29.411 }, 00:08:29.411 "method": "bdev_nvme_attach_controller" 00:08:29.411 } 00:08:29.411 EOF 00:08:29.411 )") 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66365 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:29.411 { 00:08:29.411 "params": { 00:08:29.411 "name": "Nvme$subsystem", 00:08:29.411 "trtype": "$TEST_TRANSPORT", 00:08:29.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:29.411 "adrfam": "ipv4", 00:08:29.411 "trsvcid": "$NVMF_PORT", 00:08:29.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:29.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:29.411 "hdgst": ${hdgst:-false}, 00:08:29.411 "ddgst": ${ddgst:-false} 00:08:29.411 }, 00:08:29.411 "method": "bdev_nvme_attach_controller" 00:08:29.411 } 00:08:29.411 EOF 00:08:29.411 )") 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66371 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:08:29.411 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:29.669 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:29.669 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:29.669 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:29.670 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:29.670 "params": { 00:08:29.670 "name": "Nvme1", 00:08:29.670 "trtype": "tcp", 00:08:29.670 "traddr": "10.0.0.2", 00:08:29.670 "adrfam": "ipv4", 00:08:29.670 "trsvcid": "4420", 00:08:29.670 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:29.670 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:29.670 "hdgst": false, 00:08:29.670 "ddgst": false 00:08:29.670 }, 00:08:29.670 "method": "bdev_nvme_attach_controller" 00:08:29.670 }' 00:08:29.670 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:29.670 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:29.670 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:29.670 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:29.670 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:29.670 { 00:08:29.670 "params": { 00:08:29.670 "name": "Nvme$subsystem", 00:08:29.670 "trtype": "$TEST_TRANSPORT", 00:08:29.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:29.670 "adrfam": "ipv4", 00:08:29.670 "trsvcid": "$NVMF_PORT", 00:08:29.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:29.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:29.670 "hdgst": ${hdgst:-false}, 00:08:29.670 "ddgst": ${ddgst:-false} 00:08:29.670 }, 00:08:29.670 "method": "bdev_nvme_attach_controller" 00:08:29.670 } 00:08:29.670 EOF 00:08:29.670 )") 00:08:29.670 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:29.670 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:29.670 "params": { 00:08:29.670 "name": "Nvme1", 00:08:29.670 "trtype": "tcp", 00:08:29.670 "traddr": "10.0.0.2", 00:08:29.670 "adrfam": "ipv4", 00:08:29.670 "trsvcid": "4420", 00:08:29.670 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:29.670 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:29.670 "hdgst": false, 00:08:29.670 "ddgst": false 00:08:29.670 }, 00:08:29.670 "method": "bdev_nvme_attach_controller" 00:08:29.670 }' 00:08:29.670 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:29.670 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:29.670 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
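The four bdevperf invocations above differ only in core mask (-m 0x10/0x20/0x40/0x80), instance id (-i 1..4) and workload (write, read, flush, unmap); each reads a generated JSON config on /dev/fd/63 whose bdev_nvme_attach_controller parameters are the ones printf'd in the trace. A minimal stand-alone sketch of that pattern follows; the outer "subsystems"/"bdev"/"config" envelope is the usual SPDK JSON-config layout and is assumed here, since only the inner attach object is visible in the trace.

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    # Attach parameters exactly as printed above; the surrounding envelope is assumed.
    cfg='{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller",
      "params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420",
      "subnqn":"nqn.2016-06.io.spdk:cnode1","hostnqn":"nqn.2016-06.io.spdk:host1","hdgst":false,"ddgst":false}}]}]}'
    "$bdevperf" -m 0x10 -i 1 --json <(printf '%s' "$cfg") -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    "$bdevperf" -m 0x20 -i 2 --json <(printf '%s' "$cfg") -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    "$bdevperf" -m 0x40 -i 3 --json <(printf '%s' "$cfg") -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    "$bdevperf" -m 0x80 -i 4 --json <(printf '%s' "$cfg") -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"   # the script waits on each pid before the results print

The much higher IOPS of the flush job in the tables that follow is plausible rather than anomalous: flush against the malloc-backed namespace has no media to persist to, while the other workloads move real 4 KiB payloads over NVMe/TCP.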
00:08:29.670 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:29.670 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:29.670 "params": { 00:08:29.670 "name": "Nvme1", 00:08:29.670 "trtype": "tcp", 00:08:29.670 "traddr": "10.0.0.2", 00:08:29.670 "adrfam": "ipv4", 00:08:29.670 "trsvcid": "4420", 00:08:29.670 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:29.670 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:29.670 "hdgst": false, 00:08:29.670 "ddgst": false 00:08:29.670 }, 00:08:29.670 "method": "bdev_nvme_attach_controller" 00:08:29.670 }' 00:08:29.670 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:29.670 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:29.670 "params": { 00:08:29.670 "name": "Nvme1", 00:08:29.670 "trtype": "tcp", 00:08:29.670 "traddr": "10.0.0.2", 00:08:29.670 "adrfam": "ipv4", 00:08:29.670 "trsvcid": "4420", 00:08:29.670 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:29.670 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:29.670 "hdgst": false, 00:08:29.670 "ddgst": false 00:08:29.670 }, 00:08:29.670 "method": "bdev_nvme_attach_controller" 00:08:29.670 }' 00:08:29.670 [2024-07-15 07:12:38.424096] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:08:29.670 [2024-07-15 07:12:38.424455] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:29.670 [2024-07-15 07:12:38.424866] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:08:29.670 [2024-07-15 07:12:38.425059] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:29.670 07:12:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66361 00:08:29.670 [2024-07-15 07:12:38.448696] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:08:29.670 [2024-07-15 07:12:38.448975] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:29.670 [2024-07-15 07:12:38.454511] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:08:29.670 [2024-07-15 07:12:38.454852] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:29.670 [2024-07-15 07:12:38.598239] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.928 [2024-07-15 07:12:38.640245] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.928 [2024-07-15 07:12:38.654860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:29.928 [2024-07-15 07:12:38.685847] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.928 [2024-07-15 07:12:38.687772] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:29.928 [2024-07-15 07:12:38.696197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:29.928 [2024-07-15 07:12:38.724759] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.928 [2024-07-15 07:12:38.730215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:29.928 [2024-07-15 07:12:38.733656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:29.928 [2024-07-15 07:12:38.763652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:29.928 Running I/O for 1 seconds... 00:08:29.928 [2024-07-15 07:12:38.796157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:29.928 Running I/O for 1 seconds... 00:08:29.928 [2024-07-15 07:12:38.830261] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:29.928 Running I/O for 1 seconds... 00:08:30.186 Running I/O for 1 seconds... 
00:08:31.119 00:08:31.119 Latency(us) 00:08:31.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.119 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:31.119 Nvme1n1 : 1.01 10355.51 40.45 0.00 0.00 12307.91 7060.01 18826.71 00:08:31.119 =================================================================================================================== 00:08:31.119 Total : 10355.51 40.45 0.00 0.00 12307.91 7060.01 18826.71 00:08:31.119 00:08:31.119 Latency(us) 00:08:31.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.119 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:31.119 Nvme1n1 : 1.01 7367.71 28.78 0.00 0.00 17262.55 10128.29 25737.77 00:08:31.119 =================================================================================================================== 00:08:31.119 Total : 7367.71 28.78 0.00 0.00 17262.55 10128.29 25737.77 00:08:31.119 00:08:31.119 Latency(us) 00:08:31.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.119 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:31.119 Nvme1n1 : 1.00 156157.95 609.99 0.00 0.00 816.76 363.05 1251.14 00:08:31.119 =================================================================================================================== 00:08:31.119 Total : 156157.95 609.99 0.00 0.00 816.76 363.05 1251.14 00:08:31.119 00:08:31.119 Latency(us) 00:08:31.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.119 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:31.119 Nvme1n1 : 1.01 7592.85 29.66 0.00 0.00 16775.69 8519.68 35746.91 00:08:31.119 =================================================================================================================== 00:08:31.119 Total : 7592.85 29.66 0.00 0.00 16775.69 8519.68 35746.91 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66363 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66365 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66371 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:31.477 rmmod nvme_tcp 00:08:31.477 rmmod nvme_fabrics 00:08:31.477 rmmod nvme_keyring 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 66326 ']' 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 66326 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 66326 ']' 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 66326 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66326 00:08:31.477 killing process with pid 66326 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66326' 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 66326 00:08:31.477 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 66326 00:08:31.738 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:31.738 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:31.738 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:31.738 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:31.738 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:31.738 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.738 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.738 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.738 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:31.738 ************************************ 00:08:31.738 END TEST nvmf_bdev_io_wait 00:08:31.738 ************************************ 00:08:31.738 00:08:31.738 real 0m3.804s 00:08:31.738 user 0m16.344s 00:08:31.738 sys 0m2.125s 00:08:31.738 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.738 07:12:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.738 07:12:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:31.738 07:12:40 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:31.738 07:12:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:31.738 07:12:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.738 07:12:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:31.738 ************************************ 00:08:31.738 START TEST nvmf_queue_depth 00:08:31.738 ************************************ 00:08:31.738 07:12:40 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:31.738 * Looking for test storage... 00:08:31.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.738 07:12:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:31.739 Cannot find device "nvmf_tgt_br" 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:31.739 Cannot find device "nvmf_tgt_br2" 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:31.739 Cannot find device "nvmf_tgt_br" 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:31.739 Cannot find device "nvmf_tgt_br2" 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:08:31.739 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:31.998 07:12:40 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:31.998 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:31.998 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:08:31.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:08:31.998 00:08:31.998 --- 10.0.0.2 ping statistics --- 00:08:31.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.998 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:08:31.998 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:31.998 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:31.998 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:08:31.998 00:08:31.998 --- 10.0.0.3 ping statistics --- 00:08:31.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.999 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:31.999 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:31.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:08:31.999 00:08:31.999 --- 10.0.0.1 ping statistics --- 00:08:31.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.999 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:31.999 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.999 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:08:31.999 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.999 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.999 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:31.999 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:31.999 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.999 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:31.999 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:32.257 07:12:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:32.257 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:32.257 07:12:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:32.257 07:12:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.257 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=66604 00:08:32.257 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 66604 00:08:32.257 07:12:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:32.257 07:12:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66604 ']' 00:08:32.257 07:12:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.257 07:12:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:32.257 07:12:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
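The nvmf_queue_depth run that follows repeats the same bring-up in the freshly re-created namespace and then drives a single bdevperf instance at a deliberately deep queue (-q 1024) against cnode1, attaching the controller over bdevperf's own RPC socket instead of a generated JSON file. Condensed from the commands traced below, with rpc.py standing in for the rpc_cmd wrapper and paths as used in this workspace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

    # Target side (nvmf_tgt is already running in the namespace with -m 0x2, no --wait-for-rpc this time)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: bdevperf idles (-z) on its own RPC socket, gets NVMe0 attached over that
    # socket, then perform_tests starts the 10-second verify run at queue depth 1024.
    "$bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    # (the test waits for /var/tmp/bdevperf.sock before issuing RPCs)
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    "$bdevperf_py" -s /var/tmp/bdevperf.sock perform_tests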
00:08:32.257 07:12:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:32.257 07:12:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.257 [2024-07-15 07:12:41.026876] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:08:32.257 [2024-07-15 07:12:41.026989] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.257 [2024-07-15 07:12:41.173328] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.516 [2024-07-15 07:12:41.231042] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.516 [2024-07-15 07:12:41.231114] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.516 [2024-07-15 07:12:41.231135] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.516 [2024-07-15 07:12:41.231149] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.516 [2024-07-15 07:12:41.231160] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.516 [2024-07-15 07:12:41.231200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.516 [2024-07-15 07:12:41.260233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:33.083 07:12:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:33.083 07:12:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:08:33.083 07:12:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:33.083 07:12:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:33.083 07:12:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.083 07:12:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.083 07:12:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:33.083 07:12:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.083 07:12:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.083 [2024-07-15 07:12:41.982069] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.084 07:12:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.084 07:12:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:33.084 07:12:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.084 07:12:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.084 Malloc0 00:08:33.084 07:12:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.084 07:12:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:33.084 07:12:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.084 07:12:42 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@10 -- # set +x 00:08:33.084 07:12:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.084 07:12:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:33.084 07:12:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.084 07:12:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.084 07:12:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.084 07:12:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:33.084 07:12:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.084 07:12:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.084 [2024-07-15 07:12:42.034189] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.342 07:12:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.343 07:12:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=66637 00:08:33.343 07:12:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:33.343 07:12:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:33.343 07:12:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 66637 /var/tmp/bdevperf.sock 00:08:33.343 07:12:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66637 ']' 00:08:33.343 07:12:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:33.343 07:12:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:33.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:33.343 07:12:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:33.343 07:12:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:33.343 07:12:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.343 [2024-07-15 07:12:42.082783] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:08:33.343 [2024-07-15 07:12:42.082864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66637 ] 00:08:33.343 [2024-07-15 07:12:42.216326] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.343 [2024-07-15 07:12:42.275152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.602 [2024-07-15 07:12:42.304372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:33.602 07:12:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:33.602 07:12:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:08:33.602 07:12:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:33.602 07:12:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.602 07:12:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.602 NVMe0n1 00:08:33.602 07:12:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.602 07:12:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:33.859 Running I/O for 10 seconds... 00:08:43.943 00:08:43.943 Latency(us) 00:08:43.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.943 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:43.943 Verification LBA range: start 0x0 length 0x4000 00:08:43.943 NVMe0n1 : 10.11 7304.49 28.53 0.00 0.00 139493.41 27286.81 104857.60 00:08:43.943 =================================================================================================================== 00:08:43.943 Total : 7304.49 28.53 0.00 0.00 139493.41 27286.81 104857.60 00:08:43.943 0 00:08:43.943 07:12:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 66637 00:08:43.943 07:12:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66637 ']' 00:08:43.943 07:12:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66637 00:08:43.943 07:12:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:08:43.943 07:12:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:43.943 07:12:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66637 00:08:43.943 07:12:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:43.943 07:12:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:43.943 killing process with pid 66637 00:08:43.943 07:12:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66637' 00:08:43.943 07:12:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66637 00:08:43.943 Received shutdown signal, test time was about 10.000000 seconds 00:08:43.943 00:08:43.943 Latency(us) 00:08:43.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.943 
=================================================================================================================== 00:08:43.943 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:43.943 07:12:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66637 00:08:43.943 07:12:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:43.943 07:12:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:43.943 07:12:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:43.943 07:12:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:08:44.201 07:12:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:44.201 07:12:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:08:44.201 07:12:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:44.201 07:12:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:44.201 rmmod nvme_tcp 00:08:44.201 rmmod nvme_fabrics 00:08:44.201 rmmod nvme_keyring 00:08:44.201 07:12:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:44.201 07:12:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:08:44.201 07:12:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:08:44.201 07:12:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 66604 ']' 00:08:44.201 07:12:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 66604 00:08:44.201 07:12:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66604 ']' 00:08:44.201 07:12:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66604 00:08:44.201 07:12:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:08:44.201 07:12:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:44.201 07:12:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66604 00:08:44.201 07:12:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:44.201 killing process with pid 66604 00:08:44.201 07:12:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:44.201 07:12:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66604' 00:08:44.201 07:12:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66604 00:08:44.201 07:12:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66604 00:08:44.458 07:12:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:44.458 07:12:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:44.458 07:12:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:44.458 07:12:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:44.458 07:12:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:44.458 07:12:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.458 07:12:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.458 07:12:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.458 07:12:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- 
# ip -4 addr flush nvmf_init_if 00:08:44.458 00:08:44.458 real 0m12.731s 00:08:44.458 user 0m21.715s 00:08:44.458 sys 0m2.114s 00:08:44.458 07:12:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.458 07:12:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.458 ************************************ 00:08:44.458 END TEST nvmf_queue_depth 00:08:44.458 ************************************ 00:08:44.458 07:12:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:44.458 07:12:53 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:44.458 07:12:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:44.458 07:12:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.458 07:12:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:44.458 ************************************ 00:08:44.458 START TEST nvmf_target_multipath 00:08:44.458 ************************************ 00:08:44.458 07:12:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:44.458 * Looking for test storage... 00:08:44.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:44.458 07:12:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:44.458 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:44.459 07:12:53 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:44.459 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:44.459 Cannot find device "nvmf_tgt_br" 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:44.716 Cannot find device "nvmf_tgt_br2" 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:44.716 Cannot find device "nvmf_tgt_br" 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:44.716 Cannot find device "nvmf_tgt_br2" 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:44.716 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:44.716 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:44.716 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:44.974 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:44.974 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:44.974 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:44.974 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:44.974 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:44.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:44.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:08:44.974 00:08:44.974 --- 10.0.0.2 ping statistics --- 00:08:44.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.974 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:08:44.974 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:44.974 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:44.974 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:08:44.974 00:08:44.974 --- 10.0.0.3 ping statistics --- 00:08:44.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.974 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:08:44.974 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:44.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
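The ip/iptables trace above is the whole virtual topology these TCP jobs run on: a nvmf_tgt_ns_spdk namespace holding the two target addresses (10.0.0.2 and 10.0.0.3), a single initiator interface on the host side (10.0.0.1), and a bridge joining the veth peers, with port 4420 opened through iptables. Condensed into plain commands (run as root; interface names and addresses are exactly the ones used here), the setup amounts to roughly the following sketch:

  # Build the veth/namespace topology used by nvmf_veth_init.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                 # host -> both target addresses
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1        # and back to the initiator

The three pings at the end are the same connectivity smoke test the harness performs before starting the target, which is what produces the ping statistics seen in the log.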
00:08:44.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:08:44.974 00:08:44.974 --- 10.0.0.1 ping statistics --- 00:08:44.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.974 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:08:44.974 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.974 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:08:44.974 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:44.974 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.974 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:44.974 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:44.974 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.974 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:44.975 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:44.975 07:12:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:08:44.975 07:12:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:44.975 07:12:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:44.975 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:44.975 07:12:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:44.975 07:12:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:44.975 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=66948 00:08:44.975 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:44.975 07:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 66948 00:08:44.975 07:12:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 66948 ']' 00:08:44.975 07:12:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.975 07:12:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:44.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.975 07:12:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.975 07:12:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:44.975 07:12:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:44.975 [2024-07-15 07:12:53.829456] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
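Once the namespace answers pings, the target application is launched inside it with all four cores (-m 0xF) and every tracepoint group enabled (-e 0xFFFF), and the harness blocks until the RPC socket comes up (waitforlisten 66948 above). A minimal stand-in for that start-and-wait step might look like the sketch below; the rpc_get_methods polling loop is an assumption used here in place of the harness's own wait logic, while the binary path and flags are taken from the trace.

  # Start nvmf_tgt inside the target namespace and wait for its RPC socket.
  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Poll the default RPC socket until the target answers (assumption: rpc_get_methods
  # succeeds as soon as the application is listening on /var/tmp/spdk.sock).
  until "$rpc" -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
  done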
00:08:44.975 [2024-07-15 07:12:53.829573] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.233 [2024-07-15 07:12:53.975145] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.233 [2024-07-15 07:12:54.044903] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.233 [2024-07-15 07:12:54.044981] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.233 [2024-07-15 07:12:54.044994] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.233 [2024-07-15 07:12:54.045005] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.233 [2024-07-15 07:12:54.045014] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.233 [2024-07-15 07:12:54.045126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.233 [2024-07-15 07:12:54.045781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.233 [2024-07-15 07:12:54.045940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.233 [2024-07-15 07:12:54.045932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.233 [2024-07-15 07:12:54.078428] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:46.168 07:12:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:46.168 07:12:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:08:46.168 07:12:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:46.168 07:12:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:46.168 07:12:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:46.168 07:12:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.168 07:12:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:46.168 [2024-07-15 07:12:55.016122] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.168 07:12:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:46.426 Malloc0 00:08:46.426 07:12:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:46.685 07:12:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:47.250 07:12:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.507 [2024-07-15 07:12:56.270101] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.507 07:12:56 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:47.766 [2024-07-15 07:12:56.610978] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:47.766 07:12:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:08:48.024 07:12:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:48.024 07:12:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:48.024 07:12:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:08:48.024 07:12:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:48.024 07:12:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:48.024 07:12:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:50.547 07:12:58 
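The whole multipath fixture is the handful of RPCs traced here: a TCP transport (with the -o -u 8192 options the harness passes), a 64 MiB malloc bdev with 512-byte blocks, one subsystem created with the -a -s -r flags (-r turning on the ANA reporting the rest of the test depends on), its namespace, and one listener per target address; the host then connects to both addresses with the same host NQN, so the kernel folds them into a single subsystem (nvme-subsys0) with two controller paths, nvme0c0n1 and nvme0c1n1. Gathered from the trace above (every value is verbatim from this run), the equivalent script is roughly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9

  # Target side: transport, backing bdev, subsystem with ANA reporting, two listeners.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME -r
  $rpc nvmf_subsystem_add_ns "$nqn" Malloc0
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420

  # Host side: connect once per address so the kernel assembles one multipath subsystem.
  for addr in 10.0.0.2 10.0.0.3; do
    nvme connect -t tcp -n "$nqn" -a "$addr" -s 4420 -g -G \
        --hostnqn="$hostnqn" --hostid=d3ffbb73-b196-4070-b8e4-0883df0bb9c9
  done
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # waitforserial: expect one namespace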
nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67043 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:50.547 07:12:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:50.547 [global] 00:08:50.547 thread=1 00:08:50.547 invalidate=1 00:08:50.547 rw=randrw 00:08:50.547 time_based=1 00:08:50.547 runtime=6 00:08:50.547 ioengine=libaio 00:08:50.547 direct=1 00:08:50.547 bs=4096 00:08:50.547 iodepth=128 00:08:50.547 norandommap=0 00:08:50.547 numjobs=1 00:08:50.547 00:08:50.547 verify_dump=1 00:08:50.547 verify_backlog=512 00:08:50.547 verify_state_save=0 00:08:50.547 do_verify=1 00:08:50.547 verify=crc32c-intel 00:08:50.547 [job0] 00:08:50.547 filename=/dev/nvme0n1 00:08:50.547 Could not set queue depth (nvme0n1) 00:08:50.547 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:50.547 fio-3.35 00:08:50.547 Starting 1 thread 00:08:51.117 07:12:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:51.687 07:13:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:51.945 07:13:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:51.945 07:13:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:51.945 
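From here the test is a failover drill: fio drives 4 KiB random read/write at queue depth 128 against /dev/nvme0n1 for six seconds while the ANA state of each listener is flipped through the RPC interface, and check_ana_state polls the block device's sysfs attribute until the host-side view agrees. A condensed equivalent of that set/check pair is sketched below; the one-second retry interval is an assumption (the harness simply retries up to its timeout of 20), and note that the RPC spells non_optimized while sysfs reports non-optimized.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  set_ana_state() {    # <listener addr> <optimized|non_optimized|inaccessible>
    $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a "$1" -s 4420 -n "$2"
  }

  check_ana_state() {  # <nvme0cXn1> <expected sysfs state>
    local path=$1 expected=$2 timeout=20
    local f=/sys/block/$path/ana_state
    while (( timeout-- > 0 )); do
      [[ -e $f && $(<"$f") == "$expected" ]] && return 0
      sleep 1          # retry interval is an assumption
    done
    echo "$path never reached ANA state '$expected'" >&2
    return 1
  }

  # Push all I/O onto the 10.0.0.3 path while fio is running ...
  set_ana_state 10.0.0.2 inaccessible
  set_ana_state 10.0.0.3 non_optimized
  check_ana_state nvme0c0n1 inaccessible
  check_ana_state nvme0c1n1 non-optimized
  # ... then swing it back without stopping the workload.
  set_ana_state 10.0.0.2 non_optimized
  set_ana_state 10.0.0.3 inaccessible
  check_ana_state nvme0c0n1 non-optimized
  check_ana_state nvme0c1n1 inaccessible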
07:13:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:51.945 07:13:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:51.946 07:13:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:51.946 07:13:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:51.946 07:13:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:51.946 07:13:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:51.946 07:13:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:51.946 07:13:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:51.946 07:13:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:51.946 07:13:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:51.946 07:13:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:52.205 07:13:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:52.463 07:13:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:52.463 07:13:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:52.463 07:13:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:52.463 07:13:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:52.463 07:13:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:52.463 07:13:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:52.463 07:13:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:52.463 07:13:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:52.463 07:13:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:52.463 07:13:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:52.463 07:13:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:52.463 07:13:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:52.463 07:13:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67043 00:08:56.648 00:08:56.648 job0: (groupid=0, jobs=1): err= 0: pid=67064: Mon Jul 15 07:13:05 2024 00:08:56.648 read: IOPS=8509, BW=33.2MiB/s (34.9MB/s)(200MiB/6003msec) 00:08:56.648 slat (usec): min=6, max=12202, avg=69.44, stdev=321.30 00:08:56.648 clat (usec): min=581, max=32876, avg=10339.18, stdev=3702.37 00:08:56.648 lat (usec): min=605, max=32895, avg=10408.62, stdev=3725.73 00:08:56.648 clat percentiles (usec): 00:08:56.648 | 1.00th=[ 4817], 5.00th=[ 7111], 10.00th=[ 7701], 20.00th=[ 8160], 00:08:56.648 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9765], 00:08:56.648 | 70.00th=[10552], 80.00th=[11600], 90.00th=[14615], 95.00th=[19530], 00:08:56.648 | 99.00th=[23725], 99.50th=[29230], 99.90th=[31589], 99.95th=[32375], 00:08:56.648 | 99.99th=[32900] 00:08:56.648 bw ( KiB/s): min= 8984, max=26280, per=54.75%, avg=18634.91, stdev=5936.94, samples=11 00:08:56.648 iops : min= 2246, max= 6570, avg=4658.73, stdev=1484.23, samples=11 00:08:56.648 write: IOPS=4983, BW=19.5MiB/s (20.4MB/s)(101MiB/5187msec); 0 zone resets 00:08:56.648 slat (usec): min=13, max=4153, avg=81.80, stdev=219.21 00:08:56.648 clat (usec): min=1037, max=32765, avg=9035.24, stdev=3179.08 00:08:56.648 lat (usec): min=1078, max=32822, avg=9117.04, stdev=3201.55 00:08:56.648 clat percentiles (usec): 00:08:56.648 | 1.00th=[ 3818], 5.00th=[ 5276], 10.00th=[ 6587], 20.00th=[ 7242], 00:08:56.648 | 30.00th=[ 7635], 40.00th=[ 7898], 50.00th=[ 8225], 60.00th=[ 8717], 00:08:56.648 | 70.00th=[ 9241], 80.00th=[10159], 90.00th=[12125], 95.00th=[17433], 00:08:56.648 | 99.00th=[20317], 99.50th=[20841], 99.90th=[26346], 99.95th=[28181], 00:08:56.648 | 99.99th=[30802] 00:08:56.648 bw ( KiB/s): min= 8536, max=26712, per=93.51%, avg=18642.18, stdev=5984.47, samples=11 00:08:56.649 iops : min= 2134, max= 6678, avg=4660.55, stdev=1496.12, samples=11 00:08:56.649 lat (usec) : 750=0.01% 00:08:56.649 lat (msec) : 2=0.03%, 4=0.67%, 10=68.41%, 20=27.61%, 50=3.26% 00:08:56.649 cpu : usr=5.18%, sys=21.22%, ctx=4447, majf=0, minf=133 00:08:56.649 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:56.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:56.649 issued rwts: total=51081,25852,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:56.649 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:56.649 00:08:56.649 Run status group 0 (all jobs): 00:08:56.649 READ: bw=33.2MiB/s (34.9MB/s), 33.2MiB/s-33.2MiB/s (34.9MB/s-34.9MB/s), io=200MiB (209MB), run=6003-6003msec 00:08:56.649 WRITE: bw=19.5MiB/s (20.4MB/s), 19.5MiB/s-19.5MiB/s (20.4MB/s-20.4MB/s), io=101MiB (106MB), run=5187-5187msec 00:08:56.649 00:08:56.649 Disk stats (read/write): 00:08:56.649 nvme0n1: ios=49721/25852, merge=0/0, ticks=494061/219054, in_queue=713115, util=98.58% 00:08:56.649 07:13:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:08:56.906 07:13:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:57.164 07:13:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:57.164 07:13:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:57.164 07:13:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:57.164 07:13:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:57.164 07:13:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:57.164 07:13:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:57.164 07:13:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:57.164 07:13:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:57.164 07:13:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:57.164 07:13:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:57.164 07:13:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:57.164 07:13:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:57.164 07:13:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:57.164 07:13:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67145 00:08:57.164 07:13:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:57.164 07:13:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:57.164 [global] 00:08:57.164 thread=1 00:08:57.164 invalidate=1 00:08:57.164 rw=randrw 00:08:57.164 time_based=1 00:08:57.164 runtime=6 00:08:57.164 ioengine=libaio 00:08:57.164 direct=1 00:08:57.164 bs=4096 00:08:57.164 iodepth=128 00:08:57.164 norandommap=0 00:08:57.164 numjobs=1 00:08:57.164 00:08:57.164 verify_dump=1 00:08:57.164 verify_backlog=512 00:08:57.164 verify_state_save=0 00:08:57.164 do_verify=1 00:08:57.164 verify=crc32c-intel 00:08:57.164 [job0] 00:08:57.164 filename=/dev/nvme0n1 00:08:57.164 Could not set queue depth (nvme0n1) 00:08:57.164 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:57.164 fio-3.35 00:08:57.164 Starting 1 thread 00:08:58.095 07:13:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:58.660 07:13:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:58.917 07:13:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:58.917 07:13:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:58.917 07:13:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
00:08:58.917 07:13:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:58.917 07:13:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:58.917 07:13:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:58.917 07:13:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:58.917 07:13:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:58.917 07:13:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:58.917 07:13:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:58.917 07:13:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:58.917 07:13:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:58.917 07:13:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:59.480 07:13:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:59.737 07:13:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:59.737 07:13:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:59.737 07:13:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:59.737 07:13:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:59.737 07:13:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:59.737 07:13:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:59.737 07:13:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:59.737 07:13:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:59.737 07:13:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:59.737 07:13:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:59.737 07:13:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:59.737 07:13:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:59.737 07:13:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67145 00:09:03.917 00:09:03.917 job0: (groupid=0, jobs=1): err= 0: pid=67170: Mon Jul 15 07:13:12 2024 00:09:03.917 read: IOPS=10.3k, BW=40.1MiB/s (42.0MB/s)(240MiB/6002msec) 00:09:03.917 slat (usec): min=3, max=9888, avg=48.83, stdev=224.73 00:09:03.917 clat (usec): min=324, max=30943, avg=8703.24, stdev=3533.52 00:09:03.917 lat (usec): min=333, max=30984, avg=8752.06, stdev=3554.48 00:09:03.917 clat percentiles (usec): 00:09:03.917 | 1.00th=[ 1336], 5.00th=[ 3261], 10.00th=[ 4555], 20.00th=[ 5932], 00:09:03.917 | 30.00th=[ 7439], 40.00th=[ 8160], 50.00th=[ 8586], 60.00th=[ 8979], 00:09:03.917 | 70.00th=[ 9634], 80.00th=[10683], 90.00th=[12256], 95.00th=[15926], 00:09:03.917 | 99.00th=[19268], 99.50th=[20841], 99.90th=[28705], 99.95th=[29492], 00:09:03.918 | 99.99th=[30278] 00:09:03.918 bw ( KiB/s): min= 2256, max=37040, per=52.20%, avg=21414.55, stdev=9780.94, samples=11 00:09:03.918 iops : min= 564, max= 9260, avg=5353.64, stdev=2445.24, samples=11 00:09:03.918 write: IOPS=6377, BW=24.9MiB/s (26.1MB/s)(128MiB/5136msec); 0 zone resets 00:09:03.918 slat (usec): min=13, max=6479, avg=59.84, stdev=136.16 00:09:03.918 clat (usec): min=337, max=27766, avg=6900.43, stdev=2706.36 00:09:03.918 lat (usec): min=377, max=27805, avg=6960.27, stdev=2719.79 00:09:03.918 clat percentiles (usec): 00:09:03.918 | 1.00th=[ 1123], 5.00th=[ 2835], 10.00th=[ 3589], 20.00th=[ 4424], 00:09:03.918 | 30.00th=[ 5145], 40.00th=[ 6390], 50.00th=[ 7308], 60.00th=[ 7767], 00:09:03.918 | 70.00th=[ 8160], 80.00th=[ 8848], 90.00th=[ 9896], 95.00th=[10683], 00:09:03.918 | 99.00th=[15926], 99.50th=[16909], 99.90th=[18482], 99.95th=[19006], 00:09:03.918 | 99.99th=[26608] 00:09:03.918 bw ( KiB/s): min= 2272, max=37928, per=84.15%, avg=21465.45, stdev=9758.09, samples=11 00:09:03.918 iops : min= 568, max= 9482, avg=5366.36, stdev=2439.52, samples=11 00:09:03.918 lat (usec) : 500=0.05%, 750=0.16%, 1000=0.34% 00:09:03.918 lat (msec) : 2=1.54%, 4=7.77%, 10=70.10%, 20=19.56%, 50=0.48% 00:09:03.918 cpu : usr=6.31%, sys=26.29%, ctx=6068, majf=0, minf=96 00:09:03.918 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:03.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.918 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:03.918 issued rwts: total=61552,32754,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.918 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:03.918 00:09:03.918 Run status group 0 (all jobs): 00:09:03.918 READ: bw=40.1MiB/s (42.0MB/s), 40.1MiB/s-40.1MiB/s (42.0MB/s-42.0MB/s), io=240MiB (252MB), run=6002-6002msec 00:09:03.918 WRITE: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=128MiB (134MB), run=5136-5136msec 00:09:03.918 00:09:03.918 Disk stats (read/write): 00:09:03.918 nvme0n1: ios=60834/32003, merge=0/0, ticks=503603/201615, in_queue=705218, util=98.63% 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:03.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath 
-- common/autotest_common.sh@1219 -- # local i=0 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:03.918 rmmod nvme_tcp 00:09:03.918 rmmod nvme_fabrics 00:09:03.918 rmmod nvme_keyring 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 66948 ']' 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 66948 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 66948 ']' 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 66948 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66948 00:09:03.918 killing process with pid 66948 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66948' 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 66948 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 66948 00:09:03.918 
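Teardown mirrors the setup: disconnect both paths, delete the subsystem, drop the fio state files, unload the initiator modules, stop the target, and clear the test network. Condensed from the trace (the netns deletion line is an assumption for what remove_spdk_ns does here, since that step runs with xtrace disabled; nvmfpid is the target PID recorded at startup, 66948 in this run):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state
  sync
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                      # killprocess equivalent
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true    # assumption: remove_spdk_ns equivalent
  ip -4 addr flush nvmf_init_if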
07:13:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:03.918 00:09:03.918 real 0m19.585s 00:09:03.918 user 1m13.870s 00:09:03.918 sys 0m10.442s 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:03.918 ************************************ 00:09:03.918 END TEST nvmf_target_multipath 00:09:03.918 ************************************ 00:09:03.918 07:13:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:04.177 07:13:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:04.177 07:13:12 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:04.177 07:13:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:04.177 07:13:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.177 07:13:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:04.177 ************************************ 00:09:04.177 START TEST nvmf_zcopy 00:09:04.177 ************************************ 00:09:04.177 07:13:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:04.177 * Looking for test storage... 
00:09:04.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:04.177 07:13:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:04.177 07:13:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:04.177 07:13:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.177 07:13:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.177 07:13:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.177 07:13:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.177 07:13:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.177 07:13:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.177 07:13:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.177 07:13:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.177 07:13:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.177 07:13:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:04.177 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:04.178 Cannot find device "nvmf_tgt_br" 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:04.178 Cannot find device "nvmf_tgt_br2" 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:04.178 Cannot find device "nvmf_tgt_br" 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:04.178 Cannot find device "nvmf_tgt_br2" 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:04.178 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:04.435 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:04.435 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:04.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:04.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:09:04.435 00:09:04.435 --- 10.0.0.2 ping statistics --- 00:09:04.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.435 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:04.435 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:04.435 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:09:04.435 00:09:04.435 --- 10.0.0.3 ping statistics --- 00:09:04.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.435 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:04.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:04.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:09:04.435 00:09:04.435 --- 10.0.0.1 ping statistics --- 00:09:04.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.435 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=67414 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 67414 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 67414 ']' 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:04.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:04.435 07:13:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:04.693 [2024-07-15 07:13:13.431495] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:09:04.693 [2024-07-15 07:13:13.432292] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.693 [2024-07-15 07:13:13.570520] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.693 [2024-07-15 07:13:13.627937] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.693 [2024-07-15 07:13:13.627985] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:04.693 [2024-07-15 07:13:13.627996] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.693 [2024-07-15 07:13:13.628004] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.693 [2024-07-15 07:13:13.628012] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:04.693 [2024-07-15 07:13:13.628035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.950 [2024-07-15 07:13:13.656887] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:05.516 07:13:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:05.516 07:13:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:09:05.516 07:13:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:05.516 07:13:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:05.516 07:13:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.516 07:13:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.516 07:13:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:05.516 07:13:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:05.516 07:13:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.516 07:13:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.516 [2024-07-15 07:13:14.446491] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.516 07:13:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.516 07:13:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:05.516 07:13:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.516 07:13:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.516 07:13:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.516 07:13:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.516 07:13:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.516 07:13:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.516 [2024-07-15 07:13:14.462575] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.516 07:13:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.516 07:13:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:05.516 07:13:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.516 07:13:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.774 07:13:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.774 07:13:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:05.774 07:13:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.774 07:13:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
00:09:05.774 malloc0 00:09:05.774 07:13:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.774 07:13:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:05.774 07:13:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.774 07:13:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.774 07:13:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.774 07:13:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:05.774 07:13:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:05.774 07:13:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:05.774 07:13:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:05.774 07:13:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:05.774 07:13:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:05.774 { 00:09:05.774 "params": { 00:09:05.774 "name": "Nvme$subsystem", 00:09:05.774 "trtype": "$TEST_TRANSPORT", 00:09:05.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:05.774 "adrfam": "ipv4", 00:09:05.774 "trsvcid": "$NVMF_PORT", 00:09:05.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:05.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:05.774 "hdgst": ${hdgst:-false}, 00:09:05.774 "ddgst": ${ddgst:-false} 00:09:05.774 }, 00:09:05.774 "method": "bdev_nvme_attach_controller" 00:09:05.774 } 00:09:05.774 EOF 00:09:05.774 )") 00:09:05.774 07:13:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:05.774 07:13:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:05.774 07:13:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:05.774 07:13:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:05.774 "params": { 00:09:05.774 "name": "Nvme1", 00:09:05.774 "trtype": "tcp", 00:09:05.774 "traddr": "10.0.0.2", 00:09:05.774 "adrfam": "ipv4", 00:09:05.774 "trsvcid": "4420", 00:09:05.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:05.774 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:05.774 "hdgst": false, 00:09:05.774 "ddgst": false 00:09:05.774 }, 00:09:05.774 "method": "bdev_nvme_attach_controller" 00:09:05.774 }' 00:09:05.774 [2024-07-15 07:13:14.557759] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:09:05.774 [2024-07-15 07:13:14.557884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67447 ] 00:09:05.774 [2024-07-15 07:13:14.701098] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.033 [2024-07-15 07:13:14.762451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.033 [2024-07-15 07:13:14.801467] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:06.033 Running I/O for 10 seconds... 
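For readers following the xtrace above: nvmftestinit built a veth/network-namespace topology (host-side nvmf_init_if at 10.0.0.1, nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge), nvmfappstart launched nvmf_tgt inside that namespace, and target/zcopy.sh then configured the target over RPC before starting bdevperf. The sketch below condenses those steps into standalone commands; the flags, NQNs, and addresses are copied verbatim from the trace, while the framing as direct scripts/rpc.py calls (rpc_cmd in the trace is the harness wrapper around that script) and the repo-relative paths are my own, so treat this as an illustrative reconstruction rather than the harness itself.

  # Target-side RPC setup, as driven by target/zcopy.sh; flags copied from the trace.
  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy                         # TCP transport, in-capsule data size 0, zero-copy enabled
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0                                # 32 MB malloc bdev with 4096-byte blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # Initiator side: gen_nvmf_target_json emits the bdev_nvme_attach_controller config
  # shown above; in the harness it reaches bdevperf via process substitution on fd 62.
  build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192

The latency table that follows is the output of this 10-second verify pass; the same config generation is then repeated for the shorter randrw run started just below.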
00:09:15.996 00:09:15.996 Latency(us) 00:09:15.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.996 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:15.996 Verification LBA range: start 0x0 length 0x1000 00:09:15.996 Nvme1n1 : 10.01 5130.67 40.08 0.00 0.00 24871.39 1549.03 31695.59 00:09:15.996 =================================================================================================================== 00:09:15.996 Total : 5130.67 40.08 0.00 0.00 24871.39 1549.03 31695.59 00:09:16.255 07:13:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=67563 00:09:16.255 07:13:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:16.255 07:13:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:16.255 07:13:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:16.255 07:13:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:16.255 07:13:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:16.255 07:13:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:16.255 07:13:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:16.255 07:13:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:16.255 { 00:09:16.255 "params": { 00:09:16.255 "name": "Nvme$subsystem", 00:09:16.255 "trtype": "$TEST_TRANSPORT", 00:09:16.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:16.255 "adrfam": "ipv4", 00:09:16.255 "trsvcid": "$NVMF_PORT", 00:09:16.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:16.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:16.255 "hdgst": ${hdgst:-false}, 00:09:16.255 "ddgst": ${ddgst:-false} 00:09:16.255 }, 00:09:16.255 "method": "bdev_nvme_attach_controller" 00:09:16.255 } 00:09:16.255 EOF 00:09:16.255 )") 00:09:16.255 07:13:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:16.255 [2024-07-15 07:13:25.095600] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.255 [2024-07-15 07:13:25.095645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.255 07:13:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:09:16.255 07:13:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:16.255 07:13:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:16.255 "params": { 00:09:16.255 "name": "Nvme1", 00:09:16.255 "trtype": "tcp", 00:09:16.255 "traddr": "10.0.0.2", 00:09:16.255 "adrfam": "ipv4", 00:09:16.255 "trsvcid": "4420", 00:09:16.255 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:16.255 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:16.255 "hdgst": false, 00:09:16.255 "ddgst": false 00:09:16.255 }, 00:09:16.255 "method": "bdev_nvme_attach_controller" 00:09:16.255 }' 00:09:16.255 [2024-07-15 07:13:25.103589] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.255 [2024-07-15 07:13:25.103631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.255 [2024-07-15 07:13:25.111583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.255 [2024-07-15 07:13:25.111624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.255 [2024-07-15 07:13:25.123597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.255 [2024-07-15 07:13:25.123654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.255 [2024-07-15 07:13:25.135582] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.255 [2024-07-15 07:13:25.135621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.255 [2024-07-15 07:13:25.147581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.255 [2024-07-15 07:13:25.147619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.255 [2024-07-15 07:13:25.152123] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:09:16.255 [2024-07-15 07:13:25.152239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67563 ] 00:09:16.256 [2024-07-15 07:13:25.159582] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.256 [2024-07-15 07:13:25.159617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.256 [2024-07-15 07:13:25.171607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.256 [2024-07-15 07:13:25.171649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.256 [2024-07-15 07:13:25.183623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.256 [2024-07-15 07:13:25.183671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.256 [2024-07-15 07:13:25.195634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.256 [2024-07-15 07:13:25.195685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.256 [2024-07-15 07:13:25.207629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.256 [2024-07-15 07:13:25.207684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.514 [2024-07-15 07:13:25.219622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.514 [2024-07-15 07:13:25.219669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.514 [2024-07-15 07:13:25.231625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.514 [2024-07-15 07:13:25.231673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.514 [2024-07-15 07:13:25.243633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.514 [2024-07-15 07:13:25.243689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.514 [2024-07-15 07:13:25.255627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.514 [2024-07-15 07:13:25.255676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.514 [2024-07-15 07:13:25.267633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.514 [2024-07-15 07:13:25.267684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.514 [2024-07-15 07:13:25.279638] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.514 [2024-07-15 07:13:25.279686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.514 [2024-07-15 07:13:25.291638] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.514 [2024-07-15 07:13:25.291687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.514 [2024-07-15 07:13:25.292528] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.514 [2024-07-15 07:13:25.303655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.514 [2024-07-15 07:13:25.303709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:16.514 [2024-07-15 07:13:25.315659] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.514 [2024-07-15 07:13:25.315713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.514 [2024-07-15 07:13:25.327653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.514 [2024-07-15 07:13:25.327706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.514 [2024-07-15 07:13:25.339670] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.514 [2024-07-15 07:13:25.339725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.514 [2024-07-15 07:13:25.351680] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.514 [2024-07-15 07:13:25.351735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.514 [2024-07-15 07:13:25.363751] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.514 [2024-07-15 07:13:25.363842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.514 [2024-07-15 07:13:25.375743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.514 [2024-07-15 07:13:25.375821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.514 [2024-07-15 07:13:25.381307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.514 [2024-07-15 07:13:25.387729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.514 [2024-07-15 07:13:25.387806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.514 [2024-07-15 07:13:25.399765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.514 [2024-07-15 07:13:25.399848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.514 [2024-07-15 07:13:25.411770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.514 [2024-07-15 07:13:25.411854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.514 [2024-07-15 07:13:25.423747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.514 [2024-07-15 07:13:25.423823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.514 [2024-07-15 07:13:25.426673] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:16.514 [2024-07-15 07:13:25.435774] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.514 [2024-07-15 07:13:25.435851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.514 [2024-07-15 07:13:25.443757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.514 [2024-07-15 07:13:25.443829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.514 [2024-07-15 07:13:25.455753] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.514 [2024-07-15 07:13:25.455829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.831 [2024-07-15 07:13:25.467777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:16.831 [2024-07-15 07:13:25.467860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.831 [2024-07-15 07:13:25.479793] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.831 [2024-07-15 07:13:25.479868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.831 [2024-07-15 07:13:25.491829] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.831 [2024-07-15 07:13:25.491923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.831 [2024-07-15 07:13:25.499796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.831 [2024-07-15 07:13:25.499861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.831 [2024-07-15 07:13:25.507803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.831 [2024-07-15 07:13:25.507873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.831 [2024-07-15 07:13:25.515818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.831 [2024-07-15 07:13:25.515886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.831 [2024-07-15 07:13:25.523766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.831 [2024-07-15 07:13:25.523813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.831 Running I/O for 5 seconds... 00:09:16.831 [2024-07-15 07:13:25.531778] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.831 [2024-07-15 07:13:25.531827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.831 [2024-07-15 07:13:25.546750] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.831 [2024-07-15 07:13:25.546831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.831 [2024-07-15 07:13:25.560879] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.831 [2024-07-15 07:13:25.560968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.831 [2024-07-15 07:13:25.577365] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.831 [2024-07-15 07:13:25.577454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.831 [2024-07-15 07:13:25.594161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.831 [2024-07-15 07:13:25.594246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.831 [2024-07-15 07:13:25.609841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.831 [2024-07-15 07:13:25.609926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.831 [2024-07-15 07:13:25.625796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.831 [2024-07-15 07:13:25.625889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.831 [2024-07-15 07:13:25.643880] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.831 [2024-07-15 07:13:25.643976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
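The error pairs that dominate the rest of this stretch of the log appear to be exercised deliberately by the test rather than indicating a crash: while the 5-second randrw bdevperf job started above (perfpid=67563) is running, target/zcopy.sh keeps issuing nvmf_subsystem_add_ns for NSID 1 against cnode1 to exercise subsystem pause/resume while zero-copy I/O is in flight. Each attempt pauses the subsystem, finds NSID 1 still attached (subsystem.c's "Requested NSID 1 already in use"), and the paused-state RPC callback then reports "Unable to add namespace". A rough, hypothetical reconstruction of that loop follows; the actual logic lives in target/zcopy.sh and is not shown in this log, and perfpid here refers to the bdevperf process recorded above.

  # Hypothetical sketch: keep re-adding NSID 1 while the background bdevperf
  # job is alive; every attempt is expected to fail with the errors seen here.
  while kill -0 "$perfpid" 2>/dev/null; do
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done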
00:09:16.831 [2024-07-15 07:13:25.657961] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.831 [2024-07-15 07:13:25.658042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.831 [2024-07-15 07:13:25.675264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.831 [2024-07-15 07:13:25.675357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.831 [2024-07-15 07:13:25.693168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.831 [2024-07-15 07:13:25.693257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.831 [2024-07-15 07:13:25.706274] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.831 [2024-07-15 07:13:25.706352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.831 [2024-07-15 07:13:25.726680] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.831 [2024-07-15 07:13:25.726774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.831 [2024-07-15 07:13:25.749327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.831 [2024-07-15 07:13:25.749436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.831 [2024-07-15 07:13:25.764403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.831 [2024-07-15 07:13:25.764498] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.089 [2024-07-15 07:13:25.780773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.089 [2024-07-15 07:13:25.780875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.089 [2024-07-15 07:13:25.798582] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.089 [2024-07-15 07:13:25.798682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.089 [2024-07-15 07:13:25.815879] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.089 [2024-07-15 07:13:25.815973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.089 [2024-07-15 07:13:25.828928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.089 [2024-07-15 07:13:25.829023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.089 [2024-07-15 07:13:25.848098] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.089 [2024-07-15 07:13:25.848202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.089 [2024-07-15 07:13:25.866031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.089 [2024-07-15 07:13:25.866140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.089 [2024-07-15 07:13:25.880099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.089 [2024-07-15 07:13:25.880185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.089 [2024-07-15 07:13:25.898253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.089 
[2024-07-15 07:13:25.898325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.089 [2024-07-15 07:13:25.911696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.089 [2024-07-15 07:13:25.911790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.089 [2024-07-15 07:13:25.927332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.089 [2024-07-15 07:13:25.927405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.089 [2024-07-15 07:13:25.942257] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.089 [2024-07-15 07:13:25.942325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.089 [2024-07-15 07:13:25.957888] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.089 [2024-07-15 07:13:25.957970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.089 [2024-07-15 07:13:25.971323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.089 [2024-07-15 07:13:25.971410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.089 [2024-07-15 07:13:25.991808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.089 [2024-07-15 07:13:25.991894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.089 [2024-07-15 07:13:26.006475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.089 [2024-07-15 07:13:26.006574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.089 [2024-07-15 07:13:26.024466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.089 [2024-07-15 07:13:26.024563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.089 [2024-07-15 07:13:26.039760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.089 [2024-07-15 07:13:26.039848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.360 [2024-07-15 07:13:26.058591] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.360 [2024-07-15 07:13:26.058684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.360 [2024-07-15 07:13:26.073600] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.360 [2024-07-15 07:13:26.073698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.360 [2024-07-15 07:13:26.089159] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.360 [2024-07-15 07:13:26.089238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.360 [2024-07-15 07:13:26.109001] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.360 [2024-07-15 07:13:26.109138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.360 [2024-07-15 07:13:26.123306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.360 [2024-07-15 07:13:26.123413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.360 [2024-07-15 07:13:26.141249] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.360 [2024-07-15 07:13:26.141314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.360 [2024-07-15 07:13:26.157673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.360 [2024-07-15 07:13:26.157746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.360 [2024-07-15 07:13:26.174007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.360 [2024-07-15 07:13:26.174114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.360 [2024-07-15 07:13:26.188272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.360 [2024-07-15 07:13:26.188352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.360 [2024-07-15 07:13:26.201716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.360 [2024-07-15 07:13:26.201787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.361 [2024-07-15 07:13:26.219487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.361 [2024-07-15 07:13:26.219576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.361 [2024-07-15 07:13:26.233758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.361 [2024-07-15 07:13:26.233844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.361 [2024-07-15 07:13:26.249932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.361 [2024-07-15 07:13:26.250030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.361 [2024-07-15 07:13:26.267980] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.361 [2024-07-15 07:13:26.268102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.361 [2024-07-15 07:13:26.283999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.361 [2024-07-15 07:13:26.284116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.361 [2024-07-15 07:13:26.297528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.361 [2024-07-15 07:13:26.297612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.361 [2024-07-15 07:13:26.313997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.361 [2024-07-15 07:13:26.314116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.619 [2024-07-15 07:13:26.332012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.619 [2024-07-15 07:13:26.332135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.619 [2024-07-15 07:13:26.348448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.619 [2024-07-15 07:13:26.348543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.619 [2024-07-15 07:13:26.361854] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.619 [2024-07-15 07:13:26.361952] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.619 [2024-07-15 07:13:26.381845] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.619 [2024-07-15 07:13:26.381945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.619 [2024-07-15 07:13:26.396974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.619 [2024-07-15 07:13:26.397111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.619 [2024-07-15 07:13:26.415096] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.619 [2024-07-15 07:13:26.415201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.619 [2024-07-15 07:13:26.428893] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.619 [2024-07-15 07:13:26.428981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.619 [2024-07-15 07:13:26.448058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.619 [2024-07-15 07:13:26.448175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.619 [2024-07-15 07:13:26.463273] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.619 [2024-07-15 07:13:26.463365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.619 [2024-07-15 07:13:26.480471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.619 [2024-07-15 07:13:26.480572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.619 [2024-07-15 07:13:26.494255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.619 [2024-07-15 07:13:26.494348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.619 [2024-07-15 07:13:26.510856] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.619 [2024-07-15 07:13:26.510954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.619 [2024-07-15 07:13:26.527612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.619 [2024-07-15 07:13:26.527709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.619 [2024-07-15 07:13:26.544019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.619 [2024-07-15 07:13:26.544131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.619 [2024-07-15 07:13:26.555231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.620 [2024-07-15 07:13:26.555331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.620 [2024-07-15 07:13:26.571946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.620 [2024-07-15 07:13:26.572048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.878 [2024-07-15 07:13:26.587127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.878 [2024-07-15 07:13:26.587231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.878 [2024-07-15 07:13:26.602837] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.878 [2024-07-15 07:13:26.602931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.878 [2024-07-15 07:13:26.618120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.878 [2024-07-15 07:13:26.618215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.878 [2024-07-15 07:13:26.634271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.878 [2024-07-15 07:13:26.634344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.878 [2024-07-15 07:13:26.647963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.878 [2024-07-15 07:13:26.648050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.878 [2024-07-15 07:13:26.666928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.878 [2024-07-15 07:13:26.667032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.878 [2024-07-15 07:13:26.684416] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.878 [2024-07-15 07:13:26.684520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.878 [2024-07-15 07:13:26.700821] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.878 [2024-07-15 07:13:26.700922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.878 [2024-07-15 07:13:26.717512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.878 [2024-07-15 07:13:26.717620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.878 [2024-07-15 07:13:26.731208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.878 [2024-07-15 07:13:26.731314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.878 [2024-07-15 07:13:26.750497] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.878 [2024-07-15 07:13:26.750589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.878 [2024-07-15 07:13:26.765275] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.878 [2024-07-15 07:13:26.765362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.878 [2024-07-15 07:13:26.781801] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.878 [2024-07-15 07:13:26.781867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.878 [2024-07-15 07:13:26.798963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.878 [2024-07-15 07:13:26.799032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.878 [2024-07-15 07:13:26.815193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.878 [2024-07-15 07:13:26.815321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.878 [2024-07-15 07:13:26.828620] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.878 [2024-07-15 07:13:26.828722] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.136 [2024-07-15 07:13:26.847904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.136 [2024-07-15 07:13:26.848012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.136 [2024-07-15 07:13:26.863725] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.136 [2024-07-15 07:13:26.863824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.136 [2024-07-15 07:13:26.879491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.136 [2024-07-15 07:13:26.879603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.136 [2024-07-15 07:13:26.896659] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.136 [2024-07-15 07:13:26.896767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.136 [2024-07-15 07:13:26.911307] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.136 [2024-07-15 07:13:26.911400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.136 [2024-07-15 07:13:26.928064] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.136 [2024-07-15 07:13:26.928182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.136 [2024-07-15 07:13:26.945594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.136 [2024-07-15 07:13:26.945695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.136 [2024-07-15 07:13:26.962828] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.136 [2024-07-15 07:13:26.962932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.136 [2024-07-15 07:13:26.976381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.136 [2024-07-15 07:13:26.976468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.136 [2024-07-15 07:13:26.991425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.136 [2024-07-15 07:13:26.991496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.136 [2024-07-15 07:13:27.006667] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.136 [2024-07-15 07:13:27.006735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.136 [2024-07-15 07:13:27.019432] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.136 [2024-07-15 07:13:27.019530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.136 [2024-07-15 07:13:27.039478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.136 [2024-07-15 07:13:27.039566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.136 [2024-07-15 07:13:27.057203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.136 [2024-07-15 07:13:27.057293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.136 [2024-07-15 07:13:27.072686] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.136 [2024-07-15 07:13:27.072746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.136 [2024-07-15 07:13:27.088875] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.136 [2024-07-15 07:13:27.088963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.394 [2024-07-15 07:13:27.105693] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.394 [2024-07-15 07:13:27.105779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.394 [2024-07-15 07:13:27.123709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.394 [2024-07-15 07:13:27.123814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.394 [2024-07-15 07:13:27.137298] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.394 [2024-07-15 07:13:27.137387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.394 [2024-07-15 07:13:27.157030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.394 [2024-07-15 07:13:27.157144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.394 [2024-07-15 07:13:27.172050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.394 [2024-07-15 07:13:27.172159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.394 [2024-07-15 07:13:27.190175] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.394 [2024-07-15 07:13:27.190281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.394 [2024-07-15 07:13:27.207794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.394 [2024-07-15 07:13:27.207889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.394 [2024-07-15 07:13:27.221385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.394 [2024-07-15 07:13:27.221440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.394 [2024-07-15 07:13:27.236785] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.394 [2024-07-15 07:13:27.236853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.394 [2024-07-15 07:13:27.251109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.394 [2024-07-15 07:13:27.251205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.394 [2024-07-15 07:13:27.266441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.394 [2024-07-15 07:13:27.266537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.394 [2024-07-15 07:13:27.279001] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.394 [2024-07-15 07:13:27.279067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.394 [2024-07-15 07:13:27.294870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.394 [2024-07-15 07:13:27.294940] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.394 [2024-07-15 07:13:27.309799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.394 [2024-07-15 07:13:27.309866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.394 [2024-07-15 07:13:27.326036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.394 [2024-07-15 07:13:27.326118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.394 [2024-07-15 07:13:27.342646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.394 [2024-07-15 07:13:27.342719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.652 [2024-07-15 07:13:27.358393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.652 [2024-07-15 07:13:27.358461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.652 [2024-07-15 07:13:27.373561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.652 [2024-07-15 07:13:27.373631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.652 [2024-07-15 07:13:27.388040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.652 [2024-07-15 07:13:27.388140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.652 [2024-07-15 07:13:27.403998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.652 [2024-07-15 07:13:27.404134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.652 [2024-07-15 07:13:27.420611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.652 [2024-07-15 07:13:27.420702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.652 [2024-07-15 07:13:27.435962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.652 [2024-07-15 07:13:27.436050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.652 [2024-07-15 07:13:27.449721] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.652 [2024-07-15 07:13:27.449820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.652 [2024-07-15 07:13:27.468367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.652 [2024-07-15 07:13:27.468477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.652 [2024-07-15 07:13:27.483439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.652 [2024-07-15 07:13:27.483529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.652 [2024-07-15 07:13:27.502105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.652 [2024-07-15 07:13:27.502203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.652 [2024-07-15 07:13:27.516637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.652 [2024-07-15 07:13:27.516733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.652 [2024-07-15 07:13:27.535047] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.652 [2024-07-15 07:13:27.535168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.652 [2024-07-15 07:13:27.551127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.652 [2024-07-15 07:13:27.551203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.652 [2024-07-15 07:13:27.560910] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.652 [2024-07-15 07:13:27.560979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.652 [2024-07-15 07:13:27.575698] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.652 [2024-07-15 07:13:27.575782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.652 [2024-07-15 07:13:27.588718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.652 [2024-07-15 07:13:27.588813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.910 [2024-07-15 07:13:27.607551] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.910 [2024-07-15 07:13:27.607637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.910 [2024-07-15 07:13:27.622695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.910 [2024-07-15 07:13:27.622765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.910 [2024-07-15 07:13:27.637261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.910 [2024-07-15 07:13:27.637330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.910 [2024-07-15 07:13:27.655691] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.910 [2024-07-15 07:13:27.655764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.910 [2024-07-15 07:13:27.669989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.910 [2024-07-15 07:13:27.670092] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.910 [2024-07-15 07:13:27.685019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.910 [2024-07-15 07:13:27.685134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.911 [2024-07-15 07:13:27.701107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.911 [2024-07-15 07:13:27.701184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.911 [2024-07-15 07:13:27.717602] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.911 [2024-07-15 07:13:27.717686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.911 [2024-07-15 07:13:27.733243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.911 [2024-07-15 07:13:27.733352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.911 [2024-07-15 07:13:27.746396] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.911 [2024-07-15 07:13:27.746501] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.911 [2024-07-15 07:13:27.763880] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.911 [2024-07-15 07:13:27.763956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.911 [2024-07-15 07:13:27.780027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.911 [2024-07-15 07:13:27.780153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.911 [2024-07-15 07:13:27.793288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.911 [2024-07-15 07:13:27.793374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.911 [2024-07-15 07:13:27.812129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.911 [2024-07-15 07:13:27.812224] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.911 [2024-07-15 07:13:27.826544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.911 [2024-07-15 07:13:27.826657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.911 [2024-07-15 07:13:27.844840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.911 [2024-07-15 07:13:27.844946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.911 [2024-07-15 07:13:27.861535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.911 [2024-07-15 07:13:27.861632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.169 [2024-07-15 07:13:27.874472] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.169 [2024-07-15 07:13:27.874561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.169 [2024-07-15 07:13:27.894154] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.169 [2024-07-15 07:13:27.894248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.169 [2024-07-15 07:13:27.911666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.169 [2024-07-15 07:13:27.911760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.169 [2024-07-15 07:13:27.925055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.169 [2024-07-15 07:13:27.925158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.169 [2024-07-15 07:13:27.944118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.169 [2024-07-15 07:13:27.944207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.169 [2024-07-15 07:13:27.961854] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.169 [2024-07-15 07:13:27.961945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.169 [2024-07-15 07:13:27.978265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.169 [2024-07-15 07:13:27.978357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.169 [2024-07-15 07:13:27.991778] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.169 [2024-07-15 07:13:27.991873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.169 [2024-07-15 07:13:28.008929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.169 [2024-07-15 07:13:28.009044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.169 [2024-07-15 07:13:28.025244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.169 [2024-07-15 07:13:28.025312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.169 [2024-07-15 07:13:28.038322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.170 [2024-07-15 07:13:28.038405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.170 [2024-07-15 07:13:28.056571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.170 [2024-07-15 07:13:28.056670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.170 [2024-07-15 07:13:28.073933] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.170 [2024-07-15 07:13:28.074025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.170 [2024-07-15 07:13:28.087196] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.170 [2024-07-15 07:13:28.087286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.170 [2024-07-15 07:13:28.107188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.170 [2024-07-15 07:13:28.107283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.428 [2024-07-15 07:13:28.125698] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.428 [2024-07-15 07:13:28.125795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.428 [2024-07-15 07:13:28.140294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.428 [2024-07-15 07:13:28.140384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.428 [2024-07-15 07:13:28.155853] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.428 [2024-07-15 07:13:28.155939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.428 [2024-07-15 07:13:28.170953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.428 [2024-07-15 07:13:28.171061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.428 [2024-07-15 07:13:28.189346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.428 [2024-07-15 07:13:28.189429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.428 [2024-07-15 07:13:28.203193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.428 [2024-07-15 07:13:28.203272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.428 [2024-07-15 07:13:28.219870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.428 [2024-07-15 07:13:28.219961] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.428 [2024-07-15 07:13:28.235119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.428 [2024-07-15 07:13:28.235203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.428 [2024-07-15 07:13:28.251012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.428 [2024-07-15 07:13:28.251113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.428 [2024-07-15 07:13:28.269347] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.428 [2024-07-15 07:13:28.269445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.428 [2024-07-15 07:13:28.286706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.428 [2024-07-15 07:13:28.286770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.428 [2024-07-15 07:13:28.302465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.428 [2024-07-15 07:13:28.302547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.428 [2024-07-15 07:13:28.317603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.428 [2024-07-15 07:13:28.317679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.428 [2024-07-15 07:13:28.334090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.428 [2024-07-15 07:13:28.334164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.428 [2024-07-15 07:13:28.350646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.428 [2024-07-15 07:13:28.350721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.428 [2024-07-15 07:13:28.367198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.428 [2024-07-15 07:13:28.367278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.686 [2024-07-15 07:13:28.385739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.686 [2024-07-15 07:13:28.385819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.686 [2024-07-15 07:13:28.398977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.686 [2024-07-15 07:13:28.399059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.686 [2024-07-15 07:13:28.413544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.686 [2024-07-15 07:13:28.413655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.686 [2024-07-15 07:13:28.431100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.686 [2024-07-15 07:13:28.431194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.686 [2024-07-15 07:13:28.447812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.686 [2024-07-15 07:13:28.447893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.686 [2024-07-15 07:13:28.461298] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.686 [2024-07-15 07:13:28.461376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.686 [2024-07-15 07:13:28.476183] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.686 [2024-07-15 07:13:28.476258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.686 [2024-07-15 07:13:28.490444] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.686 [2024-07-15 07:13:28.490540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.686 [2024-07-15 07:13:28.506244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.686 [2024-07-15 07:13:28.506337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.686 [2024-07-15 07:13:28.521839] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.686 [2024-07-15 07:13:28.521949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.686 [2024-07-15 07:13:28.537847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.686 [2024-07-15 07:13:28.537940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.686 [2024-07-15 07:13:28.555823] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.686 [2024-07-15 07:13:28.555917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.686 [2024-07-15 07:13:28.569995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.686 [2024-07-15 07:13:28.570116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.686 [2024-07-15 07:13:28.586680] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.686 [2024-07-15 07:13:28.586778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.686 [2024-07-15 07:13:28.601912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.686 [2024-07-15 07:13:28.602007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.686 [2024-07-15 07:13:28.618067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.686 [2024-07-15 07:13:28.618179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.686 [2024-07-15 07:13:28.636299] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.686 [2024-07-15 07:13:28.636390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.945 [2024-07-15 07:13:28.654134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.945 [2024-07-15 07:13:28.654223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.945 [2024-07-15 07:13:28.668430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.945 [2024-07-15 07:13:28.668525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.945 [2024-07-15 07:13:28.685905] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.945 [2024-07-15 07:13:28.686004] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.945 [2024-07-15 07:13:28.702911] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.945 [2024-07-15 07:13:28.702984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.945 [2024-07-15 07:13:28.719256] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.945 [2024-07-15 07:13:28.719345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.945 [2024-07-15 07:13:28.735696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.945 [2024-07-15 07:13:28.735797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.945 [2024-07-15 07:13:28.748902] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.945 [2024-07-15 07:13:28.749019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.945 [2024-07-15 07:13:28.768892] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.945 [2024-07-15 07:13:28.769007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.945 [2024-07-15 07:13:28.784333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.945 [2024-07-15 07:13:28.784421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.945 [2024-07-15 07:13:28.800593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.945 [2024-07-15 07:13:28.800689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.945 [2024-07-15 07:13:28.814362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.945 [2024-07-15 07:13:28.814457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.945 [2024-07-15 07:13:28.830151] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.945 [2024-07-15 07:13:28.830252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.945 [2024-07-15 07:13:28.847827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.945 [2024-07-15 07:13:28.847928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.945 [2024-07-15 07:13:28.861902] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.945 [2024-07-15 07:13:28.862003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.945 [2024-07-15 07:13:28.877987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.945 [2024-07-15 07:13:28.878121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.945 [2024-07-15 07:13:28.893022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.945 [2024-07-15 07:13:28.893139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.203 [2024-07-15 07:13:28.909548] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.203 [2024-07-15 07:13:28.909647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.203 [2024-07-15 07:13:28.922759] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.203 [2024-07-15 07:13:28.922855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.203 [2024-07-15 07:13:28.941921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.203 [2024-07-15 07:13:28.942019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.203 [2024-07-15 07:13:28.956393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.203 [2024-07-15 07:13:28.956486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.203 [2024-07-15 07:13:28.972737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.203 [2024-07-15 07:13:28.972839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.203 [2024-07-15 07:13:28.990918] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.203 [2024-07-15 07:13:28.991037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.203 [2024-07-15 07:13:29.004368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.203 [2024-07-15 07:13:29.004462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.203 [2024-07-15 07:13:29.023828] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.203 [2024-07-15 07:13:29.023928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.203 [2024-07-15 07:13:29.040953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.203 [2024-07-15 07:13:29.041064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.203 [2024-07-15 07:13:29.054587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.203 [2024-07-15 07:13:29.054679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.203 [2024-07-15 07:13:29.070978] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.203 [2024-07-15 07:13:29.071091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.203 [2024-07-15 07:13:29.088541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.203 [2024-07-15 07:13:29.088637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.203 [2024-07-15 07:13:29.101323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.203 [2024-07-15 07:13:29.101425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.204 [2024-07-15 07:13:29.119430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.204 [2024-07-15 07:13:29.119503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.204 [2024-07-15 07:13:29.135317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.204 [2024-07-15 07:13:29.135383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.204 [2024-07-15 07:13:29.152942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.204 [2024-07-15 07:13:29.153035] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.462 [2024-07-15 07:13:29.167800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.462 [2024-07-15 07:13:29.167876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.462 [2024-07-15 07:13:29.179803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.462 [2024-07-15 07:13:29.179868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.462 [2024-07-15 07:13:29.197649] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.462 [2024-07-15 07:13:29.197732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.462 [2024-07-15 07:13:29.215418] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.462 [2024-07-15 07:13:29.215525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.462 [2024-07-15 07:13:29.229211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.462 [2024-07-15 07:13:29.229313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.462 [2024-07-15 07:13:29.244286] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.462 [2024-07-15 07:13:29.244364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.462 [2024-07-15 07:13:29.259106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.462 [2024-07-15 07:13:29.259180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.462 [2024-07-15 07:13:29.278541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.462 [2024-07-15 07:13:29.278642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.462 [2024-07-15 07:13:29.296744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.462 [2024-07-15 07:13:29.296840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.462 [2024-07-15 07:13:29.311224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.462 [2024-07-15 07:13:29.311294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.462 [2024-07-15 07:13:29.328310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.462 [2024-07-15 07:13:29.328411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.462 [2024-07-15 07:13:29.345615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.462 [2024-07-15 07:13:29.345725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.462 [2024-07-15 07:13:29.359669] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.462 [2024-07-15 07:13:29.359770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.462 [2024-07-15 07:13:29.378285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.462 [2024-07-15 07:13:29.378377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.462 [2024-07-15 07:13:29.392710] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.462 [2024-07-15 07:13:29.392805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.462 [2024-07-15 07:13:29.411503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.462 [2024-07-15 07:13:29.411598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.720 [2024-07-15 07:13:29.425991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.720 [2024-07-15 07:13:29.426135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.720 [2024-07-15 07:13:29.442260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.720 [2024-07-15 07:13:29.442358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.720 [2024-07-15 07:13:29.459760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.720 [2024-07-15 07:13:29.459831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.720 [2024-07-15 07:13:29.482811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.720 [2024-07-15 07:13:29.482894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.720 [2024-07-15 07:13:29.511658] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.720 [2024-07-15 07:13:29.511743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.720 [2024-07-15 07:13:29.547436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.720 [2024-07-15 07:13:29.547526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.720 [2024-07-15 07:13:29.584852] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.720 [2024-07-15 07:13:29.584937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.720 [2024-07-15 07:13:29.621407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.720 [2024-07-15 07:13:29.621497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.720 [2024-07-15 07:13:29.650563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.720 [2024-07-15 07:13:29.650655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.720 [2024-07-15 07:13:29.667968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.720 [2024-07-15 07:13:29.668094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.978 [2024-07-15 07:13:29.686498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.978 [2024-07-15 07:13:29.686600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.978 [2024-07-15 07:13:29.700689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.978 [2024-07-15 07:13:29.700785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.978 [2024-07-15 07:13:29.717043] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.978 [2024-07-15 07:13:29.717144] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.978 [2024-07-15 07:13:29.732978] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.978 [2024-07-15 07:13:29.733088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.978 [2024-07-15 07:13:29.744264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.978 [2024-07-15 07:13:29.744368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.978 [2024-07-15 07:13:29.757472] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.978 [2024-07-15 07:13:29.757546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.978 [2024-07-15 07:13:29.774443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.978 [2024-07-15 07:13:29.774544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.978 [2024-07-15 07:13:29.791569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.978 [2024-07-15 07:13:29.791672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.978 [2024-07-15 07:13:29.805328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.978 [2024-07-15 07:13:29.805433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.978 [2024-07-15 07:13:29.822052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.978 [2024-07-15 07:13:29.835842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.978 [2024-07-15 07:13:29.853807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.978 [2024-07-15 07:13:29.853912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.978 [2024-07-15 07:13:29.867306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.978 [2024-07-15 07:13:29.867403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.978 [2024-07-15 07:13:29.886895] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.978 [2024-07-15 07:13:29.887003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.978 [2024-07-15 07:13:29.901860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.978 [2024-07-15 07:13:29.901960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.978 [2024-07-15 07:13:29.919474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.978 [2024-07-15 07:13:29.919574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.236 [2024-07-15 07:13:29.935528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.236 [2024-07-15 07:13:29.935624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.236 [2024-07-15 07:13:29.947117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.236 [2024-07-15 07:13:29.947219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.236 [2024-07-15 07:13:29.963960] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.236 [2024-07-15 07:13:29.964061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.236 [2024-07-15 07:13:29.981283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.236 [2024-07-15 07:13:29.981372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.236 [2024-07-15 07:13:29.998436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.236 [2024-07-15 07:13:29.998521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.236 [2024-07-15 07:13:30.014905] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.236 [2024-07-15 07:13:30.014978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.236 [2024-07-15 07:13:30.028001] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.236 [2024-07-15 07:13:30.028120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.236 [2024-07-15 07:13:30.046168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.236 [2024-07-15 07:13:30.046257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.236 [2024-07-15 07:13:30.063011] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.236 [2024-07-15 07:13:30.063124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.236 [2024-07-15 07:13:30.076295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.236 [2024-07-15 07:13:30.089143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.236 [2024-07-15 07:13:30.106506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.236 [2024-07-15 07:13:30.113162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.236 [2024-07-15 07:13:30.135975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.236 [2024-07-15 07:13:30.141195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.236 [2024-07-15 07:13:30.157025] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.236 [2024-07-15 07:13:30.158755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.236 [2024-07-15 07:13:30.177675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.236 [2024-07-15 07:13:30.177797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.494 [2024-07-15 07:13:30.192682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.494 [2024-07-15 07:13:30.192774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.494 [2024-07-15 07:13:30.207458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.494 [2024-07-15 07:13:30.207552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.494 [2024-07-15 07:13:30.225254] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.494 [2024-07-15 07:13:30.225343] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.494 [2024-07-15 07:13:30.238843] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.494 [2024-07-15 07:13:30.238928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.494 [2024-07-15 07:13:30.258699] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.494 [2024-07-15 07:13:30.258814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.494 [2024-07-15 07:13:30.275491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.494 [2024-07-15 07:13:30.275560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.494 [2024-07-15 07:13:30.293697] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.494 [2024-07-15 07:13:30.293781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.494 [2024-07-15 07:13:30.310735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.494 [2024-07-15 07:13:30.310814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.494 [2024-07-15 07:13:30.326727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.494 [2024-07-15 07:13:30.326816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.494 [2024-07-15 07:13:30.339761] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.494 [2024-07-15 07:13:30.339848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.494 [2024-07-15 07:13:30.356309] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.494 [2024-07-15 07:13:30.356397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.494 [2024-07-15 07:13:30.373439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.494 [2024-07-15 07:13:30.373521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.494 [2024-07-15 07:13:30.389769] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.494 [2024-07-15 07:13:30.389848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.494 [2024-07-15 07:13:30.402730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.494 [2024-07-15 07:13:30.402817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.494 [2024-07-15 07:13:30.421377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.494 [2024-07-15 07:13:30.421470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.494 [2024-07-15 07:13:30.438656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.494 [2024-07-15 07:13:30.438746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.752 [2024-07-15 07:13:30.453147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.752 [2024-07-15 07:13:30.453236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.752 [2024-07-15 07:13:30.467552] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:21.753 [2024-07-15 07:13:30.467632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:21.753 [2024-07-15 07:13:30.482218] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:21.753 [2024-07-15 07:13:30.482309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:21.753 [2024-07-15 07:13:30.499330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:21.753 [2024-07-15 07:13:30.499403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:21.753 [2024-07-15 07:13:30.512356] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:21.753 [2024-07-15 07:13:30.512438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:21.753 [2024-07-15 07:13:30.530390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:21.753 [2024-07-15 07:13:30.530470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:21.753 [2024-07-15 07:13:30.543221] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:21.753 [2024-07-15 07:13:30.543305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:21.753
00:09:21.753  Latency(us)
00:09:21.753 Device Information : runtime(s)     IOPS    MiB/s   Fail/s     TO/s   Average       min       max
00:09:21.753 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:21.753      Nvme1n1       :       5.01  8411.05    65.71     0.00     0.00  15196.51   4766.25  57671.68
00:09:21.753 ===================================================================================================================
00:09:21.753 Total               :              8411.05    65.71     0.00     0.00  15196.51   4766.25  57671.68
00:09:21.753 [2024-07-15 07:13:30.554713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:21.753 [2024-07-15 07:13:30.554785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:21.753 [2024-07-15 07:13:30.566662] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:21.753 [2024-07-15 07:13:30.566729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:21.753 [2024-07-15 07:13:30.578684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:21.753 [2024-07-15 07:13:30.578768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:21.753 [2024-07-15 07:13:30.590674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:21.753 [2024-07-15 07:13:30.590740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:21.753 [2024-07-15 07:13:30.602709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:21.753 [2024-07-15 07:13:30.602798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:21.753 [2024-07-15 07:13:30.614738] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:21.753 [2024-07-15 07:13:30.614826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:21.753 [2024-07-15 07:13:30.626706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:21.753 [2024-07-15 07:13:30.626793]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.753 [2024-07-15 07:13:30.638662] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.753 [2024-07-15 07:13:30.638724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.753 [2024-07-15 07:13:30.650696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.753 [2024-07-15 07:13:30.650774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.753 [2024-07-15 07:13:30.662670] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.753 [2024-07-15 07:13:30.662727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.753 [2024-07-15 07:13:30.674700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.753 [2024-07-15 07:13:30.674774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.753 [2024-07-15 07:13:30.686703] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.753 [2024-07-15 07:13:30.686782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.753 [2024-07-15 07:13:30.698674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.753 [2024-07-15 07:13:30.698732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.011 [2024-07-15 07:13:30.710726] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.011 [2024-07-15 07:13:30.710808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.011 [2024-07-15 07:13:30.722743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.011 [2024-07-15 07:13:30.722819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.011 [2024-07-15 07:13:30.734719] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.011 [2024-07-15 07:13:30.734789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.011 [2024-07-15 07:13:30.746709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.011 [2024-07-15 07:13:30.746771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.011 [2024-07-15 07:13:30.758719] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.011 [2024-07-15 07:13:30.758783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.011 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67563) - No such process 00:09:22.011 07:13:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 67563 00:09:22.011 07:13:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.011 07:13:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.011 07:13:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:22.011 07:13:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.011 07:13:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:22.011 07:13:30 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.011 07:13:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:22.011 delay0 00:09:22.011 07:13:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.011 07:13:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:22.011 07:13:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.011 07:13:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:22.011 07:13:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.011 07:13:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:22.011 [2024-07-15 07:13:30.957515] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:28.653 Initializing NVMe Controllers 00:09:28.653 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:28.653 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:28.653 Initialization complete. Launching workers. 00:09:28.653 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 74 00:09:28.653 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 361, failed to submit 33 00:09:28.653 success 233, unsuccess 128, failed 0 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:28.653 rmmod nvme_tcp 00:09:28.653 rmmod nvme_fabrics 00:09:28.653 rmmod nvme_keyring 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 67414 ']' 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 67414 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 67414 ']' 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 67414 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67414 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:28.653 07:13:37 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67414' 00:09:28.653 killing process with pid 67414 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 67414 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 67414 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:28.653 00:09:28.653 real 0m24.467s 00:09:28.653 user 0m39.158s 00:09:28.653 sys 0m7.101s 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.653 ************************************ 00:09:28.653 END TEST nvmf_zcopy 00:09:28.653 07:13:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:28.653 ************************************ 00:09:28.653 07:13:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:28.653 07:13:37 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:28.653 07:13:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:28.653 07:13:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.653 07:13:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:28.653 ************************************ 00:09:28.653 START TEST nvmf_nmic 00:09:28.653 ************************************ 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:28.653 * Looking for test storage... 
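Note on the output above: the long run of "Requested NSID 1 already in use" / "Unable to add namespace" messages comes from zcopy.sh repeatedly issuing nvmf_subsystem_add_ns for NSID 1 against nqn.2016-06.io.spdk:cnode1 while that NSID is still attached; the run still reaches END TEST nvmf_zcopy, so these errors are the exercised failure path rather than a test failure. A minimal sketch of reproducing one such collision by hand, assuming a freshly started nvmf_tgt with the in-tree scripts/rpc.py on its default socket (the malloc bdev name and serial number here are illustrative, not taken from this run):

  scripts/rpc.py bdev_malloc_create 64 512 -b malloc0                           # scratch 64 MiB backing bdev
  scripts/rpc.py nvmf_create_transport -t tcp                                   # TCP transport with default options
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # first add of NSID 1 succeeds
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # second add fails; target logs "Requested NSID 1 already in use"
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1          # detach NSID 1 so it can be re-added cleanly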
00:09:28.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:28.653 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:28.654 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:28.654 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:28.654 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 
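nvmf_veth_init, traced below, is what gives this run its 10.0.0.x test network: one veth pair for the initiator stays in the root namespace, two target-side veth pairs are moved into the nvmf_tgt_ns_spdk namespace, the root-side peers are enslaved to the nvmf_br bridge so 10.0.0.1 can reach 10.0.0.2 and 10.0.0.3, and an iptables rule admits NVMe/TCP on port 4420. A condensed sketch of the equivalent commands, summarized from the trace that follows (the canonical version lives in test/nvmf/common.sh):

  ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own net namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator pair, stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # first target pair
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target ends into the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up       # bridge tying the three root-side peers together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator interface
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow traffic to be forwarded across the bridge
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # sanity checks, matching the ping output below
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1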
00:09:28.654 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:28.654 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:28.654 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:28.654 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:28.654 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:28.654 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:28.654 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:28.654 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:28.654 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:28.654 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:28.654 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:28.654 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:28.654 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:28.654 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:28.654 Cannot find device "nvmf_tgt_br" 00:09:28.654 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:09:28.654 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:28.654 Cannot find device "nvmf_tgt_br2" 00:09:28.654 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:09:28.654 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:28.654 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:28.913 Cannot find device "nvmf_tgt_br" 00:09:28.913 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:09:28.913 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:28.913 Cannot find device "nvmf_tgt_br2" 00:09:28.913 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:09:28.913 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:28.913 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:28.913 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:28.913 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:28.913 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:28.913 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:28.913 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:28.913 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:28.913 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:28.913 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:28.913 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:28.913 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:09:28.913 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:28.913 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:28.913 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:28.913 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:28.913 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:29.172 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:29.172 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:29.172 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:29.172 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:29.172 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:29.172 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:29.172 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:29.172 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:29.172 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:29.172 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:29.172 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:29.172 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:29.172 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:29.172 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:29.172 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:29.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:29.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:09:29.172 00:09:29.172 --- 10.0.0.2 ping statistics --- 00:09:29.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.172 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:09:29.172 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:29.172 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:29.172 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:09:29.172 00:09:29.172 --- 10.0.0.3 ping statistics --- 00:09:29.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.172 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:29.172 07:13:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:29.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:29.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:09:29.172 00:09:29.172 --- 10.0.0.1 ping statistics --- 00:09:29.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.172 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:09:29.172 07:13:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.172 07:13:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:09:29.172 07:13:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:29.172 07:13:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.172 07:13:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:29.172 07:13:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:29.172 07:13:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.172 07:13:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:29.172 07:13:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:29.172 07:13:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:29.172 07:13:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:29.172 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:29.172 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.172 07:13:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=67882 00:09:29.172 07:13:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:29.172 07:13:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 67882 00:09:29.172 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 67882 ']' 00:09:29.172 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.172 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:29.172 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.172 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:29.172 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.431 [2024-07-15 07:13:38.144432] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:09:29.431 [2024-07-15 07:13:38.144557] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.431 [2024-07-15 07:13:38.292870] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:29.431 [2024-07-15 07:13:38.379125] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.431 [2024-07-15 07:13:38.379497] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:29.431 [2024-07-15 07:13:38.379774] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.431 [2024-07-15 07:13:38.379881] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.431 [2024-07-15 07:13:38.379936] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:29.431 [2024-07-15 07:13:38.380112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.431 [2024-07-15 07:13:38.381262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.697 [2024-07-15 07:13:38.384877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:29.697 [2024-07-15 07:13:38.385451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.697 [2024-07-15 07:13:38.431866] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:29.697 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:29.697 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:09:29.697 07:13:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:29.697 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:29.697 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.957 07:13:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.957 07:13:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:29.957 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.957 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.957 [2024-07-15 07:13:38.659520] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.957 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.957 07:13:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:29.957 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.957 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.957 Malloc0 00:09:29.957 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.957 07:13:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:29.957 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.957 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.957 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.957 07:13:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:29.957 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.957 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.957 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.957 07:13:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.957 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.957 07:13:38 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.957 [2024-07-15 07:13:38.726369] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.957 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.957 test case1: single bdev can't be used in multiple subsystems 00:09:29.957 07:13:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:29.957 07:13:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:29.957 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.957 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.957 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.958 07:13:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:29.958 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.958 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.958 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.958 07:13:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:29.958 07:13:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:29.958 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.958 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.958 [2024-07-15 07:13:38.766171] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:29.958 [2024-07-15 07:13:38.781171] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:29.958 [2024-07-15 07:13:38.781213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.958 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:29.958 07:13:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:29.958 07:13:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:29.958 07:13:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:29.958 07:13:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:29.958 07:13:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:29.958 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.958 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.958 request: 00:09:29.958 { 00:09:29.958 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:29.958 "namespace": { 00:09:29.958 "bdev_name": "Malloc0", 00:09:29.958 "no_auto_visible": false 00:09:29.958 }, 00:09:29.958 "method": "nvmf_subsystem_add_ns", 00:09:29.958 "req_id": 1 00:09:29.958 } 00:09:29.958 Got JSON-RPC error response 00:09:29.958 response: 00:09:29.958 { 00:09:29.958 "code": -32602, 00:09:29.958 "message": "Invalid parameters" 00:09:29.958 } 00:09:29.958 Adding namespace failed - expected result. 
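Editor's note: the JSON-RPC error above (-32602, "Invalid parameters") is the expected outcome of test case 1. Malloc0 is already claimed with an exclusive_write claim by cnode1, so adding it as a namespace of a second subsystem must fail. A minimal stand-alone reproduction with scripts/rpc.py (the test itself goes through the rpc_cmd wrapper; rpc.py talks to /var/tmp/spdk.sock by default) could look like the sketch below; paths and names are taken from the trace:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # subsystem 1 owns the bdev: this is the normal path
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

  # subsystem 2 tries to claim the same bdev: expected to fail
  # ("bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target")
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  if $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
      echo "unexpected: namespace add succeeded" >&2
      exit 1
  fi
  echo ' Adding namespace failed - expected result.'

Test case 2, which starts right below, then verifies the opposite direction: the same subsystem may be reached over multiple listeners (ports 4420 and 4421), and the initiator connects to cnode1 over both paths.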
00:09:29.958 test case2: host connect to nvmf target in multiple paths 00:09:29.958 [2024-07-15 07:13:38.797969] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:29.958 07:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.958 07:13:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:30.216 07:13:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:30.216 07:13:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:30.216 07:13:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:30.216 07:13:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:30.216 07:13:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:30.216 07:13:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:32.748 07:13:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:32.748 07:13:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:32.748 07:13:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:32.748 07:13:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:32.748 07:13:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:32.748 07:13:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:32.748 07:13:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:32.748 [global] 00:09:32.748 thread=1 00:09:32.748 invalidate=1 00:09:32.748 rw=write 00:09:32.748 time_based=1 00:09:32.748 runtime=1 00:09:32.748 ioengine=libaio 00:09:32.748 direct=1 00:09:32.748 bs=4096 00:09:32.748 iodepth=1 00:09:32.748 norandommap=0 00:09:32.748 numjobs=1 00:09:32.748 00:09:32.748 verify_dump=1 00:09:32.748 verify_backlog=512 00:09:32.748 verify_state_save=0 00:09:32.748 do_verify=1 00:09:32.748 verify=crc32c-intel 00:09:32.748 [job0] 00:09:32.748 filename=/dev/nvme0n1 00:09:32.748 Could not set queue depth (nvme0n1) 00:09:32.748 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.748 fio-3.35 00:09:32.748 Starting 1 thread 00:09:33.681 00:09:33.681 job0: (groupid=0, jobs=1): err= 0: pid=67966: Mon Jul 15 07:13:42 2024 00:09:33.681 read: IOPS=1860, BW=7441KiB/s (7619kB/s)(7448KiB/1001msec) 00:09:33.681 slat (usec): min=12, max=2166, avg=26.55, stdev=50.19 00:09:33.681 clat (usec): min=141, max=20769, avg=266.08, stdev=667.84 00:09:33.681 lat (usec): min=159, max=20801, avg=292.63, stdev=671.12 00:09:33.681 clat percentiles (usec): 00:09:33.681 | 1.00th=[ 155], 5.00th=[ 169], 10.00th=[ 182], 20.00th=[ 204], 00:09:33.681 | 30.00th=[ 221], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 245], 00:09:33.681 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 293], 00:09:33.681 | 99.00th=[ 355], 99.50th=[ 537], 99.90th=[13042], 99.95th=[20841], 00:09:33.681 
| 99.99th=[20841] 00:09:33.681 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:33.681 slat (usec): min=18, max=12188, avg=46.07, stdev=350.70 00:09:33.681 clat (usec): min=45, max=10675, avg=170.47, stdev=260.77 00:09:33.681 lat (usec): min=113, max=12233, avg=216.54, stdev=437.10 00:09:33.681 clat percentiles (usec): 00:09:33.681 | 1.00th=[ 101], 5.00th=[ 127], 10.00th=[ 133], 20.00th=[ 137], 00:09:33.681 | 30.00th=[ 143], 40.00th=[ 149], 50.00th=[ 155], 60.00th=[ 163], 00:09:33.681 | 70.00th=[ 172], 80.00th=[ 182], 90.00th=[ 198], 95.00th=[ 210], 00:09:33.681 | 99.00th=[ 293], 99.50th=[ 469], 99.90th=[ 2409], 99.95th=[ 4424], 00:09:33.681 | 99.99th=[10683] 00:09:33.681 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:09:33.681 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:33.681 lat (usec) : 50=0.03%, 100=0.49%, 250=82.25%, 500=16.73%, 750=0.23% 00:09:33.681 lat (usec) : 1000=0.08% 00:09:33.681 lat (msec) : 2=0.03%, 4=0.03%, 10=0.03%, 20=0.10%, 50=0.03% 00:09:33.681 cpu : usr=1.90%, sys=10.20%, ctx=3915, majf=0, minf=2 00:09:33.682 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.682 issued rwts: total=1862,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.682 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.682 00:09:33.682 Run status group 0 (all jobs): 00:09:33.682 READ: bw=7441KiB/s (7619kB/s), 7441KiB/s-7441KiB/s (7619kB/s-7619kB/s), io=7448KiB (7627kB), run=1001-1001msec 00:09:33.682 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:09:33.682 00:09:33.682 Disk stats (read/write): 00:09:33.682 nvme0n1: ios=1586/1929, merge=0/0, ticks=442/386, in_queue=828, util=90.38% 00:09:33.682 07:13:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:33.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:33.682 07:13:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:33.682 07:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:33.682 07:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:33.682 07:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.682 07:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:33.682 07:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.682 07:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:33.682 07:13:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:33.682 07:13:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:33.682 07:13:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:33.682 07:13:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:09:33.682 07:13:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:33.682 07:13:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:09:33.682 07:13:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:33.682 07:13:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:09:33.682 rmmod nvme_tcp 00:09:33.682 rmmod nvme_fabrics 00:09:33.940 rmmod nvme_keyring 00:09:33.940 07:13:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:33.940 07:13:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:09:33.940 07:13:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:09:33.940 07:13:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 67882 ']' 00:09:33.940 07:13:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 67882 00:09:33.940 07:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 67882 ']' 00:09:33.940 07:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 67882 00:09:33.940 07:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:09:33.940 07:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:33.940 07:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67882 00:09:33.940 killing process with pid 67882 00:09:33.940 07:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:33.940 07:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:33.940 07:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67882' 00:09:33.940 07:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 67882 00:09:33.940 07:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 67882 00:09:34.199 07:13:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:34.199 07:13:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:34.199 07:13:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:34.199 07:13:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:34.199 07:13:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:34.199 07:13:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.199 07:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:34.199 07:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.199 07:13:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:34.199 00:09:34.199 real 0m5.555s 00:09:34.199 user 0m15.798s 00:09:34.199 sys 0m2.824s 00:09:34.199 07:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:34.199 ************************************ 00:09:34.199 END TEST nvmf_nmic 00:09:34.199 ************************************ 00:09:34.199 07:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.199 07:13:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:34.199 07:13:43 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:34.199 07:13:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:34.199 07:13:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:34.199 07:13:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:34.199 ************************************ 00:09:34.199 START TEST nvmf_fio_target 00:09:34.199 ************************************ 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:34.199 * Looking for test storage... 00:09:34.199 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.199 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:09:34.200 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:34.200 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:34.200 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.200 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.200 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.200 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:34.200 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:34.200 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:34.200 07:13:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:34.200 07:13:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:34.200 07:13:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:34.200 07:13:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:34.200 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:34.200 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.200 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:34.200 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:34.200 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:34.200 
07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.200 07:13:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:34.200 07:13:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.200 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:34.200 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:34.200 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:34.200 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:34.200 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:34.457 Cannot find device "nvmf_tgt_br" 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:34.457 Cannot find device "nvmf_tgt_br2" 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:34.457 Cannot find device "nvmf_tgt_br" 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:34.457 Cannot find device "nvmf_tgt_br2" 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:34.457 07:13:43 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:34.457 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:34.457 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:34.457 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:34.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:34.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:09:34.715 00:09:34.715 --- 10.0.0.2 ping statistics --- 00:09:34.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.715 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:34.715 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:34.715 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:09:34.715 00:09:34.715 --- 10.0.0.3 ping statistics --- 00:09:34.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.715 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:34.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:34.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:09:34.715 00:09:34.715 --- 10.0.0.1 ping statistics --- 00:09:34.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.715 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.715 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=68144 00:09:34.716 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:34.716 07:13:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 68144 00:09:34.716 07:13:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 68144 ']' 00:09:34.716 07:13:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.716 07:13:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:34.716 07:13:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
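Editor's note: at this point nvmfappstart -m 0xF launches the target application inside the test namespace and blocks until it answers on the JSON-RPC socket; the exact launch command and the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message are visible in the trace. The wait loop below is a simplified stand-in for the waitforlisten helper, not its actual implementation; only the nvmf_tgt invocation is copied from the log:

  # launch the target in the test namespace: shm id 0, all tracepoint groups,
  # reactors on cores 0-3 (the -i/-e/-m flags shown in the trace above)
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # simplified stand-in for waitforlisten: poll until the RPC socket answers
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do
      $rpc -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done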
00:09:34.716 07:13:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:34.716 07:13:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.976 [2024-07-15 07:13:43.691206] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:09:34.976 [2024-07-15 07:13:43.691633] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.976 [2024-07-15 07:13:43.848677] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:35.234 [2024-07-15 07:13:43.936709] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.234 [2024-07-15 07:13:43.937130] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:35.234 [2024-07-15 07:13:43.937334] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.234 [2024-07-15 07:13:43.937546] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.234 [2024-07-15 07:13:43.937568] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:35.234 [2024-07-15 07:13:43.937671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.234 [2024-07-15 07:13:43.937753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.234 [2024-07-15 07:13:43.938386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.234 [2024-07-15 07:13:43.943844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.234 [2024-07-15 07:13:43.993835] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:35.234 07:13:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:35.234 07:13:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:09:35.234 07:13:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:35.234 07:13:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:35.234 07:13:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.234 07:13:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.234 07:13:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:35.798 [2024-07-15 07:13:44.497418] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.798 07:13:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:36.055 07:13:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:36.055 07:13:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:36.312 07:13:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:36.312 07:13:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:36.877 07:13:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
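Editor's note: the stretch of trace that begins here builds the set of block devices fio will exercise: two plain malloc bdevs, a raid0 over two more, and a concat raid over three more, all exported as namespaces of cnode1 behind a TCP listener on 10.0.0.2:4420. Condensed into one sketch (the script itself captures each returned bdev name in shell variables; names, sizes and flags below are the ones visible in the trace, and NVME_HOSTNQN/NVME_HOSTID come from nvmf/common.sh as shown earlier), the sequence is roughly:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $RPC nvmf_create_transport -t tcp -o -u 8192

  # plain namespaces (bdev_malloc_create auto-names these Malloc0, Malloc1, ...)
  $RPC bdev_malloc_create 64 512          # -> Malloc0
  $RPC bdev_malloc_create 64 512          # -> Malloc1
  # raid0 built from two more malloc bdevs
  $RPC bdev_malloc_create 64 512          # -> Malloc2
  $RPC bdev_malloc_create 64 512          # -> Malloc3
  $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  # concat built from three more
  $RPC bdev_malloc_create 64 512          # -> Malloc4
  $RPC bdev_malloc_create 64 512          # -> Malloc5
  $RPC bdev_malloc_create 64 512          # -> Malloc6
  $RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

  # one subsystem, four namespaces, one TCP listener
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for bdev in Malloc0 Malloc1 raid0 concat0; do
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
  done
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # connect from the initiator side (4 namespaces expected) and run the fio workload
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
       -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v

The four fio jobs that follow (nvme0n1 through nvme0n4) map one-to-one onto the four namespaces created here.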
00:09:36.877 07:13:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:37.164 07:13:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:37.164 07:13:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:37.741 07:13:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.000 07:13:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:38.000 07:13:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.258 07:13:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:38.258 07:13:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.825 07:13:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:38.826 07:13:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:39.084 07:13:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:39.342 07:13:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:39.342 07:13:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:39.908 07:13:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:39.908 07:13:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:40.166 07:13:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:40.424 [2024-07-15 07:13:49.242712] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:40.424 07:13:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:40.682 07:13:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:40.966 07:13:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:41.224 07:13:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:41.224 07:13:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:41.224 07:13:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:41.224 07:13:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:41.224 07:13:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # 
nvme_device_counter=4 00:09:41.224 07:13:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:43.131 07:13:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:43.131 07:13:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:43.131 07:13:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:43.131 07:13:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:43.131 07:13:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:43.131 07:13:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:43.131 07:13:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:43.131 [global] 00:09:43.131 thread=1 00:09:43.131 invalidate=1 00:09:43.131 rw=write 00:09:43.131 time_based=1 00:09:43.131 runtime=1 00:09:43.131 ioengine=libaio 00:09:43.132 direct=1 00:09:43.132 bs=4096 00:09:43.132 iodepth=1 00:09:43.132 norandommap=0 00:09:43.132 numjobs=1 00:09:43.132 00:09:43.132 verify_dump=1 00:09:43.132 verify_backlog=512 00:09:43.132 verify_state_save=0 00:09:43.132 do_verify=1 00:09:43.132 verify=crc32c-intel 00:09:43.132 [job0] 00:09:43.132 filename=/dev/nvme0n1 00:09:43.132 [job1] 00:09:43.132 filename=/dev/nvme0n2 00:09:43.132 [job2] 00:09:43.132 filename=/dev/nvme0n3 00:09:43.132 [job3] 00:09:43.132 filename=/dev/nvme0n4 00:09:43.390 Could not set queue depth (nvme0n1) 00:09:43.390 Could not set queue depth (nvme0n2) 00:09:43.390 Could not set queue depth (nvme0n3) 00:09:43.390 Could not set queue depth (nvme0n4) 00:09:43.390 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.390 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.390 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.390 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.390 fio-3.35 00:09:43.390 Starting 4 threads 00:09:44.762 00:09:44.762 job0: (groupid=0, jobs=1): err= 0: pid=68343: Mon Jul 15 07:13:53 2024 00:09:44.762 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:44.762 slat (usec): min=18, max=111, avg=25.83, stdev= 5.47 00:09:44.762 clat (usec): min=145, max=596, avg=228.24, stdev=31.68 00:09:44.762 lat (usec): min=166, max=623, avg=254.08, stdev=31.60 00:09:44.762 clat percentiles (usec): 00:09:44.762 | 1.00th=[ 159], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 204], 00:09:44.762 | 30.00th=[ 210], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 233], 00:09:44.762 | 70.00th=[ 241], 80.00th=[ 251], 90.00th=[ 269], 95.00th=[ 281], 00:09:44.762 | 99.00th=[ 314], 99.50th=[ 330], 99.90th=[ 457], 99.95th=[ 502], 00:09:44.762 | 99.99th=[ 594] 00:09:44.762 write: IOPS=2322, BW=9291KiB/s (9514kB/s)(9300KiB/1001msec); 0 zone resets 00:09:44.762 slat (usec): min=20, max=111, avg=38.37, stdev= 7.01 00:09:44.762 clat (usec): min=98, max=451, avg=161.70, stdev=27.27 00:09:44.762 lat (usec): min=129, max=490, avg=200.07, stdev=28.34 00:09:44.762 clat percentiles (usec): 00:09:44.762 | 1.00th=[ 110], 5.00th=[ 123], 10.00th=[ 135], 20.00th=[ 147], 00:09:44.762 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 
00:09:44.762 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 190], 95.00th=[ 200], 00:09:44.762 | 99.00th=[ 241], 99.50th=[ 289], 99.90th=[ 412], 99.95th=[ 453], 00:09:44.762 | 99.99th=[ 453] 00:09:44.762 bw ( KiB/s): min= 8424, max= 8424, per=28.32%, avg=8424.00, stdev= 0.00, samples=1 00:09:44.762 iops : min= 2106, max= 2106, avg=2106.00, stdev= 0.00, samples=1 00:09:44.762 lat (usec) : 100=0.07%, 250=89.37%, 500=10.52%, 750=0.05% 00:09:44.762 cpu : usr=3.10%, sys=11.40%, ctx=4373, majf=0, minf=14 00:09:44.762 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.762 issued rwts: total=2048,2325,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.762 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.762 job1: (groupid=0, jobs=1): err= 0: pid=68344: Mon Jul 15 07:13:53 2024 00:09:44.762 read: IOPS=1113, BW=4456KiB/s (4562kB/s)(4460KiB/1001msec) 00:09:44.762 slat (usec): min=14, max=118, avg=31.09, stdev= 7.19 00:09:44.762 clat (usec): min=173, max=4469, avg=381.39, stdev=226.40 00:09:44.762 lat (usec): min=191, max=4500, avg=412.48, stdev=227.83 00:09:44.762 clat percentiles (usec): 00:09:44.762 | 1.00th=[ 221], 5.00th=[ 243], 10.00th=[ 253], 20.00th=[ 273], 00:09:44.762 | 30.00th=[ 289], 40.00th=[ 310], 50.00th=[ 347], 60.00th=[ 375], 00:09:44.762 | 70.00th=[ 408], 80.00th=[ 457], 90.00th=[ 529], 95.00th=[ 611], 00:09:44.762 | 99.00th=[ 775], 99.50th=[ 1205], 99.90th=[ 4178], 99.95th=[ 4490], 00:09:44.762 | 99.99th=[ 4490] 00:09:44.762 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:44.762 slat (usec): min=20, max=139, avg=44.04, stdev=10.69 00:09:44.762 clat (usec): min=106, max=7031, avg=300.78, stdev=297.14 00:09:44.762 lat (usec): min=138, max=7080, avg=344.82, stdev=297.95 00:09:44.762 clat percentiles (usec): 00:09:44.762 | 1.00th=[ 127], 5.00th=[ 172], 10.00th=[ 190], 20.00th=[ 217], 00:09:44.762 | 30.00th=[ 239], 40.00th=[ 258], 50.00th=[ 273], 60.00th=[ 285], 00:09:44.762 | 70.00th=[ 310], 80.00th=[ 347], 90.00th=[ 416], 95.00th=[ 469], 00:09:44.762 | 99.00th=[ 545], 99.50th=[ 742], 99.90th=[ 6325], 99.95th=[ 7046], 00:09:44.762 | 99.99th=[ 7046] 00:09:44.762 bw ( KiB/s): min= 8192, max= 8192, per=27.54%, avg=8192.00, stdev= 0.00, samples=1 00:09:44.762 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:44.762 lat (usec) : 250=24.71%, 500=67.94%, 750=6.64%, 1000=0.26% 00:09:44.762 lat (msec) : 2=0.15%, 4=0.11%, 10=0.19% 00:09:44.762 cpu : usr=2.70%, sys=7.60%, ctx=2652, majf=0, minf=11 00:09:44.762 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.762 issued rwts: total=1115,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.762 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.762 job2: (groupid=0, jobs=1): err= 0: pid=68345: Mon Jul 15 07:13:53 2024 00:09:44.762 read: IOPS=1076, BW=4308KiB/s (4411kB/s)(4312KiB/1001msec) 00:09:44.762 slat (usec): min=14, max=509, avg=29.38, stdev=16.79 00:09:44.762 clat (usec): min=209, max=7029, avg=380.38, stdev=248.35 00:09:44.762 lat (usec): min=235, max=7059, avg=409.76, stdev=250.35 00:09:44.762 clat percentiles (usec): 00:09:44.762 | 1.00th=[ 231], 5.00th=[ 249], 
10.00th=[ 262], 20.00th=[ 277], 00:09:44.762 | 30.00th=[ 293], 40.00th=[ 314], 50.00th=[ 347], 60.00th=[ 379], 00:09:44.762 | 70.00th=[ 404], 80.00th=[ 449], 90.00th=[ 523], 95.00th=[ 594], 00:09:44.762 | 99.00th=[ 766], 99.50th=[ 848], 99.90th=[ 2769], 99.95th=[ 7046], 00:09:44.762 | 99.99th=[ 7046] 00:09:44.762 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:44.762 slat (usec): min=20, max=111, avg=42.38, stdev=10.70 00:09:44.762 clat (usec): min=125, max=5978, avg=315.35, stdev=249.05 00:09:44.762 lat (usec): min=155, max=6025, avg=357.73, stdev=250.99 00:09:44.762 clat percentiles (usec): 00:09:44.762 | 1.00th=[ 165], 5.00th=[ 190], 10.00th=[ 206], 20.00th=[ 233], 00:09:44.762 | 30.00th=[ 253], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 293], 00:09:44.762 | 70.00th=[ 318], 80.00th=[ 371], 90.00th=[ 433], 95.00th=[ 486], 00:09:44.762 | 99.00th=[ 668], 99.50th=[ 1123], 99.90th=[ 4113], 99.95th=[ 5997], 00:09:44.762 | 99.99th=[ 5997] 00:09:44.762 bw ( KiB/s): min= 8192, max= 8192, per=27.54%, avg=8192.00, stdev= 0.00, samples=1 00:09:44.762 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:44.762 lat (usec) : 250=18.55%, 500=73.72%, 750=6.92%, 1000=0.38% 00:09:44.762 lat (msec) : 2=0.04%, 4=0.23%, 10=0.15% 00:09:44.762 cpu : usr=2.00%, sys=7.60%, ctx=2614, majf=0, minf=11 00:09:44.762 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.762 issued rwts: total=1078,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.762 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.762 job3: (groupid=0, jobs=1): err= 0: pid=68346: Mon Jul 15 07:13:53 2024 00:09:44.762 read: IOPS=1933, BW=7732KiB/s (7918kB/s)(7740KiB/1001msec) 00:09:44.762 slat (usec): min=14, max=391, avg=26.57, stdev=13.83 00:09:44.762 clat (usec): min=4, max=7796, avg=262.86, stdev=336.08 00:09:44.762 lat (usec): min=169, max=7821, avg=289.43, stdev=336.40 00:09:44.762 clat percentiles (usec): 00:09:44.762 | 1.00th=[ 165], 5.00th=[ 176], 10.00th=[ 192], 20.00th=[ 223], 00:09:44.762 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 249], 00:09:44.762 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 297], 00:09:44.762 | 99.00th=[ 359], 99.50th=[ 914], 99.90th=[ 7504], 99.95th=[ 7767], 00:09:44.762 | 99.99th=[ 7767] 00:09:44.762 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:44.762 slat (usec): min=16, max=119, avg=35.89, stdev= 9.25 00:09:44.762 clat (usec): min=103, max=4678, avg=172.79, stdev=141.69 00:09:44.762 lat (usec): min=123, max=4717, avg=208.68, stdev=142.77 00:09:44.762 clat percentiles (usec): 00:09:44.762 | 1.00th=[ 116], 5.00th=[ 125], 10.00th=[ 131], 20.00th=[ 143], 00:09:44.762 | 30.00th=[ 155], 40.00th=[ 161], 50.00th=[ 167], 60.00th=[ 174], 00:09:44.762 | 70.00th=[ 180], 80.00th=[ 188], 90.00th=[ 200], 95.00th=[ 208], 00:09:44.762 | 99.00th=[ 253], 99.50th=[ 343], 99.90th=[ 3163], 99.95th=[ 3163], 00:09:44.762 | 99.99th=[ 4686] 00:09:44.762 bw ( KiB/s): min= 8448, max= 8448, per=28.40%, avg=8448.00, stdev= 0.00, samples=1 00:09:44.762 iops : min= 2112, max= 2112, avg=2112.00, stdev= 0.00, samples=1 00:09:44.762 lat (usec) : 10=0.03%, 250=80.34%, 500=19.16%, 750=0.10%, 1000=0.05% 00:09:44.762 lat (msec) : 2=0.05%, 4=0.18%, 10=0.10% 00:09:44.762 cpu : usr=2.70%, sys=9.70%, ctx=3989, majf=0, minf=5 
00:09:44.762 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.762 issued rwts: total=1935,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.762 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.762 00:09:44.762 Run status group 0 (all jobs): 00:09:44.763 READ: bw=24.1MiB/s (25.3MB/s), 4308KiB/s-8184KiB/s (4411kB/s-8380kB/s), io=24.1MiB (25.3MB), run=1001-1001msec 00:09:44.763 WRITE: bw=29.1MiB/s (30.5MB/s), 6138KiB/s-9291KiB/s (6285kB/s-9514kB/s), io=29.1MiB (30.5MB), run=1001-1001msec 00:09:44.763 00:09:44.763 Disk stats (read/write): 00:09:44.763 nvme0n1: ios=1739/2048, merge=0/0, ticks=457/348, in_queue=805, util=90.18% 00:09:44.763 nvme0n2: ios=1068/1266, merge=0/0, ticks=419/403, in_queue=822, util=87.84% 00:09:44.763 nvme0n3: ios=1053/1249, merge=0/0, ticks=431/404, in_queue=835, util=90.44% 00:09:44.763 nvme0n4: ios=1566/1889, merge=0/0, ticks=514/330, in_queue=844, util=90.05% 00:09:44.763 07:13:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:44.763 [global] 00:09:44.763 thread=1 00:09:44.763 invalidate=1 00:09:44.763 rw=randwrite 00:09:44.763 time_based=1 00:09:44.763 runtime=1 00:09:44.763 ioengine=libaio 00:09:44.763 direct=1 00:09:44.763 bs=4096 00:09:44.763 iodepth=1 00:09:44.763 norandommap=0 00:09:44.763 numjobs=1 00:09:44.763 00:09:44.763 verify_dump=1 00:09:44.763 verify_backlog=512 00:09:44.763 verify_state_save=0 00:09:44.763 do_verify=1 00:09:44.763 verify=crc32c-intel 00:09:44.763 [job0] 00:09:44.763 filename=/dev/nvme0n1 00:09:44.763 [job1] 00:09:44.763 filename=/dev/nvme0n2 00:09:44.763 [job2] 00:09:44.763 filename=/dev/nvme0n3 00:09:44.763 [job3] 00:09:44.763 filename=/dev/nvme0n4 00:09:44.763 Could not set queue depth (nvme0n1) 00:09:44.763 Could not set queue depth (nvme0n2) 00:09:44.763 Could not set queue depth (nvme0n3) 00:09:44.763 Could not set queue depth (nvme0n4) 00:09:44.763 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.763 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.763 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.763 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.763 fio-3.35 00:09:44.763 Starting 4 threads 00:09:46.134 00:09:46.134 job0: (groupid=0, jobs=1): err= 0: pid=68399: Mon Jul 15 07:13:54 2024 00:09:46.134 read: IOPS=1031, BW=4128KiB/s (4227kB/s)(4132KiB/1001msec) 00:09:46.134 slat (usec): min=13, max=157, avg=29.46, stdev= 8.99 00:09:46.134 clat (usec): min=143, max=10125, avg=446.39, stdev=546.14 00:09:46.134 lat (usec): min=168, max=10146, avg=475.84, stdev=546.80 00:09:46.134 clat percentiles (usec): 00:09:46.134 | 1.00th=[ 163], 5.00th=[ 219], 10.00th=[ 245], 20.00th=[ 273], 00:09:46.134 | 30.00th=[ 310], 40.00th=[ 343], 50.00th=[ 371], 60.00th=[ 392], 00:09:46.134 | 70.00th=[ 424], 80.00th=[ 482], 90.00th=[ 594], 95.00th=[ 676], 00:09:46.134 | 99.00th=[ 3326], 99.50th=[ 4359], 99.90th=[ 6783], 99.95th=[10159], 00:09:46.134 | 99.99th=[10159] 00:09:46.134 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:46.134 
slat (usec): min=20, max=125, avg=40.99, stdev= 9.30 00:09:46.134 clat (usec): min=111, max=13603, avg=284.00, stdev=723.53 00:09:46.134 lat (usec): min=144, max=13640, avg=325.00, stdev=723.79 00:09:46.134 clat percentiles (usec): 00:09:46.134 | 1.00th=[ 128], 5.00th=[ 139], 10.00th=[ 153], 20.00th=[ 172], 00:09:46.134 | 30.00th=[ 184], 40.00th=[ 202], 50.00th=[ 227], 60.00th=[ 255], 00:09:46.134 | 70.00th=[ 273], 80.00th=[ 297], 90.00th=[ 322], 95.00th=[ 355], 00:09:46.134 | 99.00th=[ 562], 99.50th=[ 2802], 99.90th=[12256], 99.95th=[13566], 00:09:46.134 | 99.99th=[13566] 00:09:46.134 bw ( KiB/s): min= 4096, max= 4096, per=14.40%, avg=4096.00, stdev= 0.00, samples=1 00:09:46.134 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:46.134 lat (usec) : 250=39.12%, 500=53.21%, 750=5.96%, 1000=0.51% 00:09:46.134 lat (msec) : 2=0.19%, 4=0.39%, 10=0.39%, 20=0.23% 00:09:46.134 cpu : usr=1.30%, sys=8.10%, ctx=2569, majf=0, minf=11 00:09:46.134 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.134 issued rwts: total=1033,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.134 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.134 job1: (groupid=0, jobs=1): err= 0: pid=68400: Mon Jul 15 07:13:54 2024 00:09:46.134 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:46.134 slat (nsec): min=12279, max=88249, avg=24882.66, stdev=6895.18 00:09:46.134 clat (usec): min=140, max=1771, avg=234.25, stdev=52.51 00:09:46.134 lat (usec): min=154, max=1797, avg=259.13, stdev=53.69 00:09:46.134 clat percentiles (usec): 00:09:46.134 | 1.00th=[ 153], 5.00th=[ 174], 10.00th=[ 194], 20.00th=[ 215], 00:09:46.134 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 241], 00:09:46.134 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 277], 00:09:46.134 | 99.00th=[ 343], 99.50th=[ 396], 99.90th=[ 668], 99.95th=[ 1221], 00:09:46.134 | 99.99th=[ 1778] 00:09:46.134 write: IOPS=2226, BW=8907KiB/s (9121kB/s)(8916KiB/1001msec); 0 zone resets 00:09:46.134 slat (usec): min=15, max=132, avg=33.70, stdev= 8.81 00:09:46.134 clat (usec): min=99, max=2413, avg=171.31, stdev=60.32 00:09:46.134 lat (usec): min=128, max=2455, avg=205.02, stdev=62.68 00:09:46.134 clat percentiles (usec): 00:09:46.134 | 1.00th=[ 111], 5.00th=[ 130], 10.00th=[ 139], 20.00th=[ 147], 00:09:46.134 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 163], 60.00th=[ 167], 00:09:46.134 | 70.00th=[ 174], 80.00th=[ 186], 90.00th=[ 217], 95.00th=[ 258], 00:09:46.134 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 347], 99.95th=[ 359], 00:09:46.134 | 99.99th=[ 2409] 00:09:46.134 bw ( KiB/s): min= 8768, max= 8768, per=30.83%, avg=8768.00, stdev= 0.00, samples=1 00:09:46.134 iops : min= 2192, max= 2192, avg=2192.00, stdev= 0.00, samples=1 00:09:46.134 lat (usec) : 100=0.02%, 250=85.74%, 500=14.15%, 750=0.02% 00:09:46.134 lat (msec) : 2=0.05%, 4=0.02% 00:09:46.134 cpu : usr=2.10%, sys=10.40%, ctx=4285, majf=0, minf=7 00:09:46.134 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.134 issued rwts: total=2048,2229,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.134 latency : target=0, window=0, percentile=100.00%, depth=1 
00:09:46.134 job2: (groupid=0, jobs=1): err= 0: pid=68402: Mon Jul 15 07:13:54 2024 00:09:46.134 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:09:46.134 slat (usec): min=15, max=312, avg=29.98, stdev=11.39 00:09:46.134 clat (usec): min=179, max=14305, avg=461.66, stdev=679.78 00:09:46.134 lat (usec): min=205, max=14336, avg=491.64, stdev=680.15 00:09:46.134 clat percentiles (usec): 00:09:46.134 | 1.00th=[ 225], 5.00th=[ 253], 10.00th=[ 273], 20.00th=[ 306], 00:09:46.134 | 30.00th=[ 351], 40.00th=[ 375], 50.00th=[ 392], 60.00th=[ 412], 00:09:46.134 | 70.00th=[ 441], 80.00th=[ 490], 90.00th=[ 570], 95.00th=[ 635], 00:09:46.134 | 99.00th=[ 1045], 99.50th=[ 4359], 99.90th=[10159], 99.95th=[14353], 00:09:46.134 | 99.99th=[14353] 00:09:46.134 write: IOPS=1151, BW=4607KiB/s (4718kB/s)(4612KiB/1001msec); 0 zone resets 00:09:46.134 slat (usec): min=20, max=347, avg=43.66, stdev=14.73 00:09:46.134 clat (usec): min=106, max=30560, avg=379.61, stdev=1315.06 00:09:46.134 lat (usec): min=135, max=30602, avg=423.27, stdev=1315.30 00:09:46.134 clat percentiles (usec): 00:09:46.134 | 1.00th=[ 149], 5.00th=[ 176], 10.00th=[ 194], 20.00th=[ 223], 00:09:46.134 | 30.00th=[ 255], 40.00th=[ 273], 50.00th=[ 289], 60.00th=[ 302], 00:09:46.134 | 70.00th=[ 318], 80.00th=[ 334], 90.00th=[ 367], 95.00th=[ 424], 00:09:46.134 | 99.00th=[ 1860], 99.50th=[ 4490], 99.90th=[27657], 99.95th=[30540], 00:09:46.134 | 99.99th=[30540] 00:09:46.134 bw ( KiB/s): min= 4096, max= 4096, per=14.40%, avg=4096.00, stdev= 0.00, samples=1 00:09:46.134 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:46.134 lat (usec) : 250=16.67%, 500=73.36%, 750=8.22%, 1000=0.46% 00:09:46.134 lat (msec) : 2=0.37%, 4=0.28%, 10=0.41%, 20=0.14%, 50=0.09% 00:09:46.134 cpu : usr=1.30%, sys=7.00%, ctx=2177, majf=0, minf=20 00:09:46.134 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.134 issued rwts: total=1024,1153,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.134 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.134 job3: (groupid=0, jobs=1): err= 0: pid=68403: Mon Jul 15 07:13:54 2024 00:09:46.134 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:46.134 slat (nsec): min=13251, max=67469, avg=24512.17, stdev=5971.03 00:09:46.134 clat (usec): min=154, max=4090, avg=241.48, stdev=91.99 00:09:46.134 lat (usec): min=169, max=4119, avg=265.99, stdev=92.63 00:09:46.134 clat percentiles (usec): 00:09:46.134 | 1.00th=[ 165], 5.00th=[ 180], 10.00th=[ 188], 20.00th=[ 208], 00:09:46.134 | 30.00th=[ 227], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 251], 00:09:46.134 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 289], 00:09:46.134 | 99.00th=[ 310], 99.50th=[ 322], 99.90th=[ 351], 99.95th=[ 685], 00:09:46.134 | 99.99th=[ 4080] 00:09:46.134 write: IOPS=2195, BW=8783KiB/s (8994kB/s)(8792KiB/1001msec); 0 zone resets 00:09:46.134 slat (nsec): min=18222, max=99821, avg=34356.73, stdev=8537.01 00:09:46.134 clat (usec): min=104, max=933, avg=167.10, stdev=33.18 00:09:46.134 lat (usec): min=132, max=972, avg=201.46, stdev=34.91 00:09:46.134 clat percentiles (usec): 00:09:46.134 | 1.00th=[ 115], 5.00th=[ 124], 10.00th=[ 131], 20.00th=[ 143], 00:09:46.134 | 30.00th=[ 151], 40.00th=[ 159], 50.00th=[ 167], 60.00th=[ 174], 00:09:46.134 | 70.00th=[ 180], 80.00th=[ 190], 90.00th=[ 202], 95.00th=[ 212], 
00:09:46.134 | 99.00th=[ 233], 99.50th=[ 245], 99.90th=[ 420], 99.95th=[ 553], 00:09:46.134 | 99.99th=[ 930] 00:09:46.134 bw ( KiB/s): min= 8192, max= 8192, per=28.81%, avg=8192.00, stdev= 0.00, samples=1 00:09:46.134 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:46.134 lat (usec) : 250=79.82%, 500=20.09%, 750=0.05%, 1000=0.02% 00:09:46.134 lat (msec) : 10=0.02% 00:09:46.134 cpu : usr=2.70%, sys=9.70%, ctx=4246, majf=0, minf=7 00:09:46.134 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.134 issued rwts: total=2048,2198,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.134 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.134 00:09:46.134 Run status group 0 (all jobs): 00:09:46.134 READ: bw=24.0MiB/s (25.2MB/s), 4092KiB/s-8184KiB/s (4190kB/s-8380kB/s), io=24.0MiB (25.2MB), run=1001-1001msec 00:09:46.134 WRITE: bw=27.8MiB/s (29.1MB/s), 4607KiB/s-8907KiB/s (4718kB/s-9121kB/s), io=27.8MiB (29.1MB), run=1001-1001msec 00:09:46.134 00:09:46.134 Disk stats (read/write): 00:09:46.134 nvme0n1: ios=979/1024, merge=0/0, ticks=488/357, in_queue=845, util=87.68% 00:09:46.134 nvme0n2: ios=1672/2048, merge=0/0, ticks=410/372, in_queue=782, util=86.60% 00:09:46.134 nvme0n3: ios=746/1024, merge=0/0, ticks=405/419, in_queue=824, util=88.52% 00:09:46.134 nvme0n4: ios=1546/2048, merge=0/0, ticks=382/374, in_queue=756, util=89.53% 00:09:46.134 07:13:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:46.134 [global] 00:09:46.134 thread=1 00:09:46.134 invalidate=1 00:09:46.134 rw=write 00:09:46.134 time_based=1 00:09:46.134 runtime=1 00:09:46.134 ioengine=libaio 00:09:46.134 direct=1 00:09:46.134 bs=4096 00:09:46.134 iodepth=128 00:09:46.134 norandommap=0 00:09:46.134 numjobs=1 00:09:46.134 00:09:46.134 verify_dump=1 00:09:46.134 verify_backlog=512 00:09:46.134 verify_state_save=0 00:09:46.134 do_verify=1 00:09:46.134 verify=crc32c-intel 00:09:46.134 [job0] 00:09:46.134 filename=/dev/nvme0n1 00:09:46.134 [job1] 00:09:46.134 filename=/dev/nvme0n2 00:09:46.134 [job2] 00:09:46.134 filename=/dev/nvme0n3 00:09:46.134 [job3] 00:09:46.134 filename=/dev/nvme0n4 00:09:46.134 Could not set queue depth (nvme0n1) 00:09:46.134 Could not set queue depth (nvme0n2) 00:09:46.134 Could not set queue depth (nvme0n3) 00:09:46.134 Could not set queue depth (nvme0n4) 00:09:46.134 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.135 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.135 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.135 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.135 fio-3.35 00:09:46.135 Starting 4 threads 00:09:47.525 00:09:47.525 job0: (groupid=0, jobs=1): err= 0: pid=68463: Mon Jul 15 07:13:56 2024 00:09:47.525 read: IOPS=4978, BW=19.4MiB/s (20.4MB/s)(19.5MiB/1003msec) 00:09:47.525 slat (usec): min=6, max=3518, avg=97.07, stdev=467.28 00:09:47.525 clat (usec): min=485, max=15400, avg=12837.64, stdev=1642.46 00:09:47.525 lat (usec): min=3543, max=15438, avg=12934.71, stdev=1582.50 00:09:47.525 clat percentiles (usec): 
00:09:47.525 | 1.00th=[ 7439], 5.00th=[10814], 10.00th=[11076], 20.00th=[11600], 00:09:47.525 | 30.00th=[11863], 40.00th=[12125], 50.00th=[13173], 60.00th=[13698], 00:09:47.525 | 70.00th=[13960], 80.00th=[14353], 90.00th=[14746], 95.00th=[15008], 00:09:47.525 | 99.00th=[15270], 99.50th=[15401], 99.90th=[15401], 99.95th=[15401], 00:09:47.525 | 99.99th=[15401] 00:09:47.525 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:09:47.525 slat (usec): min=9, max=3443, avg=93.18, stdev=399.17 00:09:47.525 clat (usec): min=8180, max=14482, avg=12232.68, stdev=1266.07 00:09:47.525 lat (usec): min=9057, max=14507, avg=12325.87, stdev=1211.19 00:09:47.525 clat percentiles (usec): 00:09:47.525 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[10552], 20.00th=[10814], 00:09:47.525 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12518], 60.00th=[12780], 00:09:47.525 | 70.00th=[13042], 80.00th=[13435], 90.00th=[13829], 95.00th=[14091], 00:09:47.525 | 99.00th=[14353], 99.50th=[14353], 99.90th=[14484], 99.95th=[14484], 00:09:47.525 | 99.99th=[14484] 00:09:47.525 bw ( KiB/s): min=19840, max=21120, per=35.89%, avg=20480.00, stdev=905.10, samples=2 00:09:47.526 iops : min= 4960, max= 5280, avg=5120.00, stdev=226.27, samples=2 00:09:47.526 lat (usec) : 500=0.01% 00:09:47.526 lat (msec) : 4=0.31%, 10=2.06%, 20=97.63% 00:09:47.526 cpu : usr=3.99%, sys=13.87%, ctx=318, majf=0, minf=11 00:09:47.526 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:47.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.526 issued rwts: total=4993,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.526 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.526 job1: (groupid=0, jobs=1): err= 0: pid=68464: Mon Jul 15 07:13:56 2024 00:09:47.526 read: IOPS=2396, BW=9585KiB/s (9815kB/s)(9604KiB/1002msec) 00:09:47.526 slat (usec): min=4, max=10147, avg=201.31, stdev=1046.17 00:09:47.526 clat (usec): min=1001, max=33757, avg=25546.23, stdev=3958.78 00:09:47.526 lat (usec): min=6740, max=33768, avg=25747.54, stdev=3829.61 00:09:47.526 clat percentiles (usec): 00:09:47.526 | 1.00th=[ 7242], 5.00th=[19530], 10.00th=[23462], 20.00th=[23987], 00:09:47.526 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24773], 60.00th=[25560], 00:09:47.526 | 70.00th=[26346], 80.00th=[28443], 90.00th=[29754], 95.00th=[33162], 00:09:47.526 | 99.00th=[33817], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:09:47.526 | 99.99th=[33817] 00:09:47.526 write: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:09:47.526 slat (usec): min=9, max=8458, avg=194.08, stdev=960.27 00:09:47.526 clat (usec): min=17142, max=32838, avg=25220.03, stdev=2430.66 00:09:47.526 lat (usec): min=19935, max=33246, avg=25414.10, stdev=2255.22 00:09:47.526 clat percentiles (usec): 00:09:47.526 | 1.00th=[18744], 5.00th=[22414], 10.00th=[22938], 20.00th=[23462], 00:09:47.526 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24511], 60.00th=[25560], 00:09:47.526 | 70.00th=[26608], 80.00th=[27395], 90.00th=[28443], 95.00th=[29492], 00:09:47.526 | 99.00th=[32637], 99.50th=[32900], 99.90th=[32900], 99.95th=[32900], 00:09:47.526 | 99.99th=[32900] 00:09:47.526 bw ( KiB/s): min= 9736, max=10744, per=17.95%, avg=10240.00, stdev=712.76, samples=2 00:09:47.526 iops : min= 2434, max= 2686, avg=2560.00, stdev=178.19, samples=2 00:09:47.526 lat (msec) : 2=0.02%, 10=0.65%, 20=3.12%, 50=96.21% 00:09:47.526 cpu : 
usr=2.80%, sys=6.39%, ctx=157, majf=0, minf=17 00:09:47.526 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:09:47.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.526 issued rwts: total=2401,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.526 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.526 job2: (groupid=0, jobs=1): err= 0: pid=68465: Mon Jul 15 07:13:56 2024 00:09:47.526 read: IOPS=2428, BW=9713KiB/s (9946kB/s)(9732KiB/1002msec) 00:09:47.526 slat (usec): min=6, max=12170, avg=202.68, stdev=966.30 00:09:47.526 clat (usec): min=400, max=37395, avg=25046.95, stdev=4527.86 00:09:47.526 lat (usec): min=3766, max=37403, avg=25249.63, stdev=4458.14 00:09:47.526 clat percentiles (usec): 00:09:47.526 | 1.00th=[ 4293], 5.00th=[19530], 10.00th=[21627], 20.00th=[23462], 00:09:47.526 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24511], 60.00th=[25560], 00:09:47.526 | 70.00th=[26346], 80.00th=[27395], 90.00th=[29754], 95.00th=[33817], 00:09:47.526 | 99.00th=[35914], 99.50th=[36963], 99.90th=[37487], 99.95th=[37487], 00:09:47.526 | 99.99th=[37487] 00:09:47.526 write: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:09:47.526 slat (usec): min=12, max=7684, avg=190.40, stdev=932.10 00:09:47.526 clat (usec): min=16420, max=31487, avg=25379.87, stdev=2450.41 00:09:47.526 lat (usec): min=19557, max=31514, avg=25570.27, stdev=2276.89 00:09:47.526 clat percentiles (usec): 00:09:47.526 | 1.00th=[19006], 5.00th=[22152], 10.00th=[22676], 20.00th=[23462], 00:09:47.526 | 30.00th=[23987], 40.00th=[24249], 50.00th=[25035], 60.00th=[25822], 00:09:47.526 | 70.00th=[26608], 80.00th=[27395], 90.00th=[28705], 95.00th=[29492], 00:09:47.526 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:09:47.526 | 99.99th=[31589] 00:09:47.526 bw ( KiB/s): min= 9888, max=10592, per=17.95%, avg=10240.00, stdev=497.80, samples=2 00:09:47.526 iops : min= 2472, max= 2648, avg=2560.00, stdev=124.45, samples=2 00:09:47.526 lat (usec) : 500=0.02% 00:09:47.526 lat (msec) : 4=0.18%, 10=0.46%, 20=3.02%, 50=96.31% 00:09:47.526 cpu : usr=2.80%, sys=6.49%, ctx=191, majf=0, minf=11 00:09:47.526 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:09:47.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.526 issued rwts: total=2433,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.526 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.526 job3: (groupid=0, jobs=1): err= 0: pid=68466: Mon Jul 15 07:13:56 2024 00:09:47.526 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:09:47.526 slat (usec): min=4, max=7285, avg=124.28, stdev=671.96 00:09:47.526 clat (usec): min=498, max=23555, avg=15318.92, stdev=2443.22 00:09:47.526 lat (usec): min=5695, max=23609, avg=15443.21, stdev=2489.71 00:09:47.526 clat percentiles (usec): 00:09:47.526 | 1.00th=[ 6456], 5.00th=[10683], 10.00th=[12649], 20.00th=[13698], 00:09:47.526 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15664], 60.00th=[15795], 00:09:47.526 | 70.00th=[16057], 80.00th=[16581], 90.00th=[17433], 95.00th=[19006], 00:09:47.526 | 99.00th=[21890], 99.50th=[22414], 99.90th=[23200], 99.95th=[23200], 00:09:47.526 | 99.99th=[23462] 00:09:47.526 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 
00:09:47.526 slat (usec): min=10, max=6783, avg=112.84, stdev=476.17 00:09:47.526 clat (usec): min=6368, max=23586, avg=15719.50, stdev=1960.51 00:09:47.526 lat (usec): min=6382, max=23607, avg=15832.35, stdev=2007.11 00:09:47.526 clat percentiles (usec): 00:09:47.526 | 1.00th=[ 9765], 5.00th=[12256], 10.00th=[13829], 20.00th=[14615], 00:09:47.526 | 30.00th=[15008], 40.00th=[15401], 50.00th=[15926], 60.00th=[16188], 00:09:47.526 | 70.00th=[16450], 80.00th=[16712], 90.00th=[17433], 95.00th=[18482], 00:09:47.526 | 99.00th=[22152], 99.50th=[22414], 99.90th=[23462], 99.95th=[23462], 00:09:47.526 | 99.99th=[23462] 00:09:47.526 bw ( KiB/s): min=16384, max=16384, per=28.71%, avg=16384.00, stdev= 0.00, samples=2 00:09:47.526 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:47.526 lat (usec) : 500=0.01% 00:09:47.526 lat (msec) : 10=2.32%, 20=93.98%, 50=3.69% 00:09:47.526 cpu : usr=3.78%, sys=12.05%, ctx=508, majf=0, minf=13 00:09:47.526 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:47.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.526 issued rwts: total=4088,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.526 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.526 00:09:47.526 Run status group 0 (all jobs): 00:09:47.526 READ: bw=54.1MiB/s (56.7MB/s), 9585KiB/s-19.4MiB/s (9815kB/s-20.4MB/s), io=54.4MiB (57.0MB), run=1002-1005msec 00:09:47.526 WRITE: bw=55.7MiB/s (58.4MB/s), 9.98MiB/s-19.9MiB/s (10.5MB/s-20.9MB/s), io=56.0MiB (58.7MB), run=1002-1005msec 00:09:47.526 00:09:47.526 Disk stats (read/write): 00:09:47.526 nvme0n1: ios=4273/4608, merge=0/0, ticks=12148/11806, in_queue=23954, util=90.17% 00:09:47.526 nvme0n2: ios=2083/2176, merge=0/0, ticks=12705/12918, in_queue=25623, util=88.55% 00:09:47.526 nvme0n3: ios=2078/2208, merge=0/0, ticks=13252/12703, in_queue=25955, util=90.62% 00:09:47.526 nvme0n4: ios=3409/3584, merge=0/0, ticks=25714/25024, in_queue=50738, util=90.57% 00:09:47.526 07:13:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:47.526 [global] 00:09:47.526 thread=1 00:09:47.526 invalidate=1 00:09:47.526 rw=randwrite 00:09:47.526 time_based=1 00:09:47.526 runtime=1 00:09:47.526 ioengine=libaio 00:09:47.526 direct=1 00:09:47.527 bs=4096 00:09:47.527 iodepth=128 00:09:47.527 norandommap=0 00:09:47.527 numjobs=1 00:09:47.527 00:09:47.527 verify_dump=1 00:09:47.527 verify_backlog=512 00:09:47.527 verify_state_save=0 00:09:47.527 do_verify=1 00:09:47.527 verify=crc32c-intel 00:09:47.527 [job0] 00:09:47.527 filename=/dev/nvme0n1 00:09:47.527 [job1] 00:09:47.527 filename=/dev/nvme0n2 00:09:47.527 [job2] 00:09:47.527 filename=/dev/nvme0n3 00:09:47.527 [job3] 00:09:47.527 filename=/dev/nvme0n4 00:09:47.527 Could not set queue depth (nvme0n1) 00:09:47.527 Could not set queue depth (nvme0n2) 00:09:47.527 Could not set queue depth (nvme0n3) 00:09:47.527 Could not set queue depth (nvme0n4) 00:09:47.527 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:47.527 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:47.527 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:47.527 job3: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:47.527 fio-3.35 00:09:47.527 Starting 4 threads 00:09:48.904 00:09:48.904 job0: (groupid=0, jobs=1): err= 0: pid=68519: Mon Jul 15 07:13:57 2024 00:09:48.904 read: IOPS=2211, BW=8848KiB/s (9060kB/s)(8892KiB/1005msec) 00:09:48.904 slat (usec): min=3, max=13539, avg=212.75, stdev=917.76 00:09:48.904 clat (usec): min=4563, max=47895, avg=27080.70, stdev=5049.80 00:09:48.904 lat (usec): min=4590, max=47915, avg=27293.45, stdev=5042.43 00:09:48.904 clat percentiles (usec): 00:09:48.904 | 1.00th=[10290], 5.00th=[21365], 10.00th=[22676], 20.00th=[24249], 00:09:48.904 | 30.00th=[25560], 40.00th=[26084], 50.00th=[26608], 60.00th=[27132], 00:09:48.904 | 70.00th=[28181], 80.00th=[30016], 90.00th=[33424], 95.00th=[35390], 00:09:48.904 | 99.00th=[42206], 99.50th=[44827], 99.90th=[47973], 99.95th=[47973], 00:09:48.904 | 99.99th=[47973] 00:09:48.904 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:09:48.904 slat (usec): min=9, max=19847, avg=195.40, stdev=932.67 00:09:48.904 clat (usec): min=13401, max=59450, avg=25819.96, stdev=6834.11 00:09:48.904 lat (usec): min=13435, max=59473, avg=26015.37, stdev=6847.08 00:09:48.904 clat percentiles (usec): 00:09:48.904 | 1.00th=[16712], 5.00th=[18220], 10.00th=[20055], 20.00th=[21627], 00:09:48.904 | 30.00th=[22938], 40.00th=[23987], 50.00th=[24249], 60.00th=[24773], 00:09:48.904 | 70.00th=[25822], 80.00th=[27657], 90.00th=[34866], 95.00th=[43254], 00:09:48.904 | 99.00th=[52691], 99.50th=[56886], 99.90th=[59507], 99.95th=[59507], 00:09:48.904 | 99.99th=[59507] 00:09:48.904 bw ( KiB/s): min= 9892, max=10568, per=25.48%, avg=10230.00, stdev=478.00, samples=2 00:09:48.904 iops : min= 2473, max= 2642, avg=2557.50, stdev=119.50, samples=2 00:09:48.904 lat (msec) : 10=0.40%, 20=6.10%, 50=92.52%, 100=0.98% 00:09:48.904 cpu : usr=2.29%, sys=7.57%, ctx=512, majf=0, minf=7 00:09:48.904 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:09:48.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.904 issued rwts: total=2223,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.904 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.904 job1: (groupid=0, jobs=1): err= 0: pid=68520: Mon Jul 15 07:13:57 2024 00:09:48.904 read: IOPS=2291, BW=9165KiB/s (9385kB/s)(9348KiB/1020msec) 00:09:48.904 slat (usec): min=6, max=21095, avg=224.11, stdev=1270.55 00:09:48.904 clat (usec): min=1011, max=57070, avg=27015.97, stdev=6952.64 00:09:48.904 lat (usec): min=16434, max=57088, avg=27240.08, stdev=6902.94 00:09:48.904 clat percentiles (usec): 00:09:48.904 | 1.00th=[16450], 5.00th=[19268], 10.00th=[20317], 20.00th=[22676], 00:09:48.904 | 30.00th=[24249], 40.00th=[25035], 50.00th=[25297], 60.00th=[26608], 00:09:48.904 | 70.00th=[27657], 80.00th=[28967], 90.00th=[34341], 95.00th=[44303], 00:09:48.904 | 99.00th=[56886], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:09:48.904 | 99.99th=[56886] 00:09:48.904 write: IOPS=2509, BW=9.80MiB/s (10.3MB/s)(10.0MiB/1020msec); 0 zone resets 00:09:48.904 slat (usec): min=10, max=11272, avg=185.14, stdev=939.57 00:09:48.904 clat (usec): min=12361, max=50404, avg=25430.47, stdev=5888.23 00:09:48.904 lat (usec): min=17013, max=50441, avg=25615.62, stdev=5824.73 00:09:48.904 clat percentiles (usec): 00:09:48.904 | 1.00th=[16909], 5.00th=[17695], 10.00th=[18744], 20.00th=[21103], 00:09:48.904 
| 30.00th=[23462], 40.00th=[23987], 50.00th=[24773], 60.00th=[25297], 00:09:48.904 | 70.00th=[25822], 80.00th=[26870], 90.00th=[33424], 95.00th=[36963], 00:09:48.904 | 99.00th=[50070], 99.50th=[50070], 99.90th=[50594], 99.95th=[50594], 00:09:48.904 | 99.99th=[50594] 00:09:48.904 bw ( KiB/s): min= 9189, max=11272, per=25.48%, avg=10230.50, stdev=1472.90, samples=2 00:09:48.904 iops : min= 2297, max= 2818, avg=2557.50, stdev=368.40, samples=2 00:09:48.904 lat (msec) : 2=0.02%, 20=12.89%, 50=85.83%, 100=1.27% 00:09:48.904 cpu : usr=2.06%, sys=6.97%, ctx=162, majf=0, minf=9 00:09:48.904 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:09:48.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.904 issued rwts: total=2337,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.904 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.904 job2: (groupid=0, jobs=1): err= 0: pid=68521: Mon Jul 15 07:13:57 2024 00:09:48.904 read: IOPS=2197, BW=8788KiB/s (8999kB/s)(8964KiB/1020msec) 00:09:48.904 slat (usec): min=9, max=18337, avg=224.18, stdev=1238.74 00:09:48.904 clat (usec): min=1417, max=47267, avg=28096.63, stdev=5686.70 00:09:48.904 lat (usec): min=18741, max=47283, avg=28320.81, stdev=5576.51 00:09:48.904 clat percentiles (usec): 00:09:48.904 | 1.00th=[19006], 5.00th=[19792], 10.00th=[20579], 20.00th=[24773], 00:09:48.904 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[27657], 00:09:48.904 | 70.00th=[30540], 80.00th=[32375], 90.00th=[35914], 95.00th=[38536], 00:09:48.905 | 99.00th=[46924], 99.50th=[46924], 99.90th=[47449], 99.95th=[47449], 00:09:48.905 | 99.99th=[47449] 00:09:48.905 write: IOPS=2509, BW=9.80MiB/s (10.3MB/s)(10.0MiB/1020msec); 0 zone resets 00:09:48.905 slat (usec): min=10, max=10399, avg=193.56, stdev=970.91 00:09:48.905 clat (usec): min=13837, max=40667, avg=25488.75, stdev=4416.33 00:09:48.905 lat (usec): min=18204, max=40695, avg=25682.31, stdev=4338.53 00:09:48.905 clat percentiles (usec): 00:09:48.905 | 1.00th=[18220], 5.00th=[18744], 10.00th=[20055], 20.00th=[23462], 00:09:48.905 | 30.00th=[23725], 40.00th=[24249], 50.00th=[25035], 60.00th=[25560], 00:09:48.905 | 70.00th=[26084], 80.00th=[27919], 90.00th=[30540], 95.00th=[35390], 00:09:48.905 | 99.00th=[40109], 99.50th=[40109], 99.90th=[40633], 99.95th=[40633], 00:09:48.905 | 99.99th=[40633] 00:09:48.905 bw ( KiB/s): min= 8934, max=11528, per=25.48%, avg=10231.00, stdev=1834.23, samples=2 00:09:48.905 iops : min= 2233, max= 2882, avg=2557.50, stdev=458.91, samples=2 00:09:48.905 lat (msec) : 2=0.02%, 20=7.92%, 50=92.06% 00:09:48.905 cpu : usr=2.36%, sys=6.48%, ctx=152, majf=0, minf=12 00:09:48.905 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:09:48.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.905 issued rwts: total=2241,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.905 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.905 job3: (groupid=0, jobs=1): err= 0: pid=68522: Mon Jul 15 07:13:57 2024 00:09:48.905 read: IOPS=2381, BW=9524KiB/s (9753kB/s)(9572KiB/1005msec) 00:09:48.905 slat (usec): min=3, max=22217, avg=231.74, stdev=1005.91 00:09:48.905 clat (usec): min=320, max=60845, avg=28382.79, stdev=8264.16 00:09:48.905 lat (usec): min=4722, max=60862, avg=28614.53, stdev=8280.76 
00:09:48.905 clat percentiles (usec): 00:09:48.905 | 1.00th=[ 5145], 5.00th=[20579], 10.00th=[22414], 20.00th=[23987], 00:09:48.905 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26608], 60.00th=[27132], 00:09:48.905 | 70.00th=[28443], 80.00th=[31327], 90.00th=[36439], 95.00th=[44827], 00:09:48.905 | 99.00th=[59507], 99.50th=[61080], 99.90th=[61080], 99.95th=[61080], 00:09:48.905 | 99.99th=[61080] 00:09:48.905 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:09:48.905 slat (usec): min=5, max=7894, avg=167.22, stdev=787.03 00:09:48.905 clat (usec): min=14451, max=47999, avg=22925.06, stdev=4187.52 00:09:48.905 lat (usec): min=14483, max=48022, avg=23092.27, stdev=4178.20 00:09:48.905 clat percentiles (usec): 00:09:48.905 | 1.00th=[16581], 5.00th=[17433], 10.00th=[18220], 20.00th=[19006], 00:09:48.905 | 30.00th=[20317], 40.00th=[21627], 50.00th=[23462], 60.00th=[23987], 00:09:48.905 | 70.00th=[24511], 80.00th=[25297], 90.00th=[26608], 95.00th=[29492], 00:09:48.905 | 99.00th=[36963], 99.50th=[40109], 99.90th=[47973], 99.95th=[47973], 00:09:48.905 | 99.99th=[47973] 00:09:48.905 bw ( KiB/s): min= 8303, max=12184, per=25.51%, avg=10243.50, stdev=2744.28, samples=2 00:09:48.905 iops : min= 2075, max= 3046, avg=2560.50, stdev=686.60, samples=2 00:09:48.905 lat (usec) : 500=0.02% 00:09:48.905 lat (msec) : 10=0.65%, 20=15.14%, 50=82.48%, 100=1.72% 00:09:48.905 cpu : usr=2.09%, sys=7.87%, ctx=540, majf=0, minf=17 00:09:48.905 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:09:48.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.905 issued rwts: total=2393,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.905 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.905 00:09:48.905 Run status group 0 (all jobs): 00:09:48.905 READ: bw=35.2MiB/s (36.9MB/s), 8788KiB/s-9524KiB/s (8999kB/s-9753kB/s), io=35.9MiB (37.7MB), run=1005-1020msec 00:09:48.905 WRITE: bw=39.2MiB/s (41.1MB/s), 9.80MiB/s-9.95MiB/s (10.3MB/s-10.4MB/s), io=40.0MiB (41.9MB), run=1005-1020msec 00:09:48.905 00:09:48.905 Disk stats (read/write): 00:09:48.905 nvme0n1: ios=2094/2236, merge=0/0, ticks=15277/13809, in_queue=29086, util=89.85% 00:09:48.905 nvme0n2: ios=2070/2368, merge=0/0, ticks=12769/12799, in_queue=25568, util=88.16% 00:09:48.905 nvme0n3: ios=2084/2144, merge=0/0, ticks=13978/11840, in_queue=25818, util=91.86% 00:09:48.905 nvme0n4: ios=2081/2415, merge=0/0, ticks=15673/13576, in_queue=29249, util=91.50% 00:09:48.905 07:13:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:48.905 07:13:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=68539 00:09:48.905 07:13:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:48.905 07:13:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:48.905 [global] 00:09:48.905 thread=1 00:09:48.905 invalidate=1 00:09:48.905 rw=read 00:09:48.905 time_based=1 00:09:48.905 runtime=10 00:09:48.905 ioengine=libaio 00:09:48.905 direct=1 00:09:48.905 bs=4096 00:09:48.905 iodepth=1 00:09:48.905 norandommap=1 00:09:48.905 numjobs=1 00:09:48.905 00:09:48.905 [job0] 00:09:48.905 filename=/dev/nvme0n1 00:09:48.905 [job1] 00:09:48.905 filename=/dev/nvme0n2 00:09:48.905 [job2] 00:09:48.905 filename=/dev/nvme0n3 00:09:48.905 [job3] 00:09:48.905 filename=/dev/nvme0n4 00:09:48.905 Could not set queue depth 
(nvme0n1) 00:09:48.905 Could not set queue depth (nvme0n2) 00:09:48.905 Could not set queue depth (nvme0n3) 00:09:48.905 Could not set queue depth (nvme0n4) 00:09:48.905 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.905 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.905 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.905 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.905 fio-3.35 00:09:48.905 Starting 4 threads 00:09:52.189 07:14:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:52.189 fio: pid=68583, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:52.189 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=21729280, buflen=4096 00:09:52.189 07:14:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:52.756 fio: pid=68582, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:52.756 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=27574272, buflen=4096 00:09:52.756 07:14:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:52.756 07:14:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:53.014 fio: pid=68580, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:53.014 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=31768576, buflen=4096 00:09:53.014 07:14:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.014 07:14:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:53.581 fio: pid=68581, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:53.581 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=49500160, buflen=4096 00:09:53.581 07:14:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.581 07:14:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:53.581 00:09:53.581 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68580: Mon Jul 15 07:14:02 2024 00:09:53.581 read: IOPS=1997, BW=7990KiB/s (8181kB/s)(30.3MiB/3883msec) 00:09:53.581 slat (usec): min=9, max=9687, avg=31.38, stdev=195.25 00:09:53.581 clat (usec): min=97, max=27970, avg=466.02, stdev=1175.98 00:09:53.581 lat (usec): min=150, max=28002, avg=497.40, stdev=1191.93 00:09:53.581 clat percentiles (usec): 00:09:53.581 | 1.00th=[ 155], 5.00th=[ 180], 10.00th=[ 200], 20.00th=[ 215], 00:09:53.581 | 30.00th=[ 227], 40.00th=[ 237], 50.00th=[ 249], 60.00th=[ 302], 00:09:53.581 | 70.00th=[ 379], 80.00th=[ 437], 90.00th=[ 523], 95.00th=[ 1020], 00:09:53.581 | 99.00th=[ 4948], 99.50th=[ 7701], 99.90th=[19268], 99.95th=[23200], 00:09:53.581 | 99.99th=[27919] 00:09:53.581 bw ( KiB/s): min= 3049, max=11826, per=25.22%, avg=7414.14, stdev=2854.23, samples=7 00:09:53.581 iops 
: min= 762, max= 2956, avg=1853.43, stdev=713.49, samples=7 00:09:53.581 lat (usec) : 100=0.01%, 250=50.14%, 500=38.09%, 750=5.81%, 1000=0.77% 00:09:53.581 lat (msec) : 2=2.91%, 4=0.97%, 10=0.93%, 20=0.28%, 50=0.06% 00:09:53.581 cpu : usr=1.06%, sys=4.82%, ctx=7778, majf=0, minf=1 00:09:53.581 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.581 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.581 issued rwts: total=7757,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.581 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.581 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68581: Mon Jul 15 07:14:02 2024 00:09:53.581 read: IOPS=2786, BW=10.9MiB/s (11.4MB/s)(47.2MiB/4337msec) 00:09:53.581 slat (usec): min=9, max=22260, avg=35.37, stdev=351.38 00:09:53.581 clat (usec): min=5, max=28066, avg=320.35, stdev=882.78 00:09:53.581 lat (usec): min=146, max=28083, avg=355.72, stdev=951.30 00:09:53.581 clat percentiles (usec): 00:09:53.581 | 1.00th=[ 149], 5.00th=[ 159], 10.00th=[ 169], 20.00th=[ 196], 00:09:53.581 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 223], 00:09:53.581 | 70.00th=[ 235], 80.00th=[ 253], 90.00th=[ 363], 95.00th=[ 478], 00:09:53.581 | 99.00th=[ 2212], 99.50th=[ 4948], 99.90th=[13173], 99.95th=[20317], 00:09:53.581 | 99.99th=[27395] 00:09:53.581 bw ( KiB/s): min= 3120, max=16280, per=36.19%, avg=10640.00, stdev=5535.53, samples=8 00:09:53.581 iops : min= 780, max= 4070, avg=2659.88, stdev=1383.87, samples=8 00:09:53.581 lat (usec) : 10=0.01%, 250=78.87%, 500=16.66%, 750=1.57%, 1000=0.35% 00:09:53.581 lat (msec) : 2=1.50%, 4=0.41%, 10=0.45%, 20=0.13%, 50=0.06% 00:09:53.581 cpu : usr=1.48%, sys=6.87%, ctx=12116, majf=0, minf=1 00:09:53.581 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.581 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.581 issued rwts: total=12086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.581 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.581 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68582: Mon Jul 15 07:14:02 2024 00:09:53.581 read: IOPS=1944, BW=7776KiB/s (7963kB/s)(26.3MiB/3463msec) 00:09:53.581 slat (usec): min=9, max=10486, avg=32.21, stdev=173.73 00:09:53.581 clat (usec): min=4, max=28041, avg=478.88, stdev=1258.73 00:09:53.581 lat (usec): min=172, max=28076, avg=511.09, stdev=1270.15 00:09:53.581 clat percentiles (usec): 00:09:53.581 | 1.00th=[ 167], 5.00th=[ 192], 10.00th=[ 212], 20.00th=[ 225], 00:09:53.581 | 30.00th=[ 235], 40.00th=[ 247], 50.00th=[ 265], 60.00th=[ 326], 00:09:53.581 | 70.00th=[ 379], 80.00th=[ 437], 90.00th=[ 529], 95.00th=[ 979], 00:09:53.581 | 99.00th=[ 5014], 99.50th=[ 8848], 99.90th=[19268], 99.95th=[24249], 00:09:53.581 | 99.99th=[27919] 00:09:53.581 bw ( KiB/s): min= 3000, max=13056, per=23.53%, avg=6919.50, stdev=3920.06, samples=6 00:09:53.581 iops : min= 750, max= 3264, avg=1729.83, stdev=980.07, samples=6 00:09:53.581 lat (usec) : 10=0.03%, 100=0.01%, 250=42.46%, 500=45.02%, 750=6.83% 00:09:53.581 lat (usec) : 1000=0.71% 00:09:53.581 lat (msec) : 2=2.91%, 4=0.70%, 10=0.89%, 20=0.33%, 50=0.09% 00:09:53.581 cpu : usr=1.33%, sys=4.71%, ctx=6751, majf=0, minf=1 
00:09:53.581 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.581 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.581 issued rwts: total=6733,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.581 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.581 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68583: Mon Jul 15 07:14:02 2024 00:09:53.581 read: IOPS=1744, BW=6978KiB/s (7145kB/s)(20.7MiB/3041msec) 00:09:53.581 slat (usec): min=9, max=14281, avg=31.91, stdev=199.29 00:09:53.581 clat (usec): min=4, max=25619, avg=537.13, stdev=1344.22 00:09:53.581 lat (usec): min=170, max=25650, avg=569.04, stdev=1358.47 00:09:53.581 clat percentiles (usec): 00:09:53.581 | 1.00th=[ 184], 5.00th=[ 215], 10.00th=[ 227], 20.00th=[ 243], 00:09:53.581 | 30.00th=[ 258], 40.00th=[ 285], 50.00th=[ 338], 60.00th=[ 371], 00:09:53.581 | 70.00th=[ 400], 80.00th=[ 441], 90.00th=[ 510], 95.00th=[ 725], 00:09:53.581 | 99.00th=[ 6456], 99.50th=[10683], 99.90th=[19268], 99.95th=[24249], 00:09:53.581 | 99.99th=[25560] 00:09:53.581 bw ( KiB/s): min= 4232, max=11361, per=23.79%, avg=6995.83, stdev=2558.26, samples=6 00:09:53.581 iops : min= 1058, max= 2840, avg=1748.83, stdev=639.42, samples=6 00:09:53.581 lat (usec) : 10=0.06%, 250=25.59%, 500=63.72%, 750=5.73%, 1000=0.75% 00:09:53.581 lat (msec) : 2=1.15%, 4=1.13%, 10=1.32%, 20=0.45%, 50=0.08% 00:09:53.581 cpu : usr=1.22%, sys=4.61%, ctx=5322, majf=0, minf=1 00:09:53.581 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.581 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.581 issued rwts: total=5306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.581 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.581 00:09:53.581 Run status group 0 (all jobs): 00:09:53.581 READ: bw=28.7MiB/s (30.1MB/s), 6978KiB/s-10.9MiB/s (7145kB/s-11.4MB/s), io=125MiB (131MB), run=3041-4337msec 00:09:53.581 00:09:53.581 Disk stats (read/write): 00:09:53.581 nvme0n1: ios=7665/0, merge=0/0, ticks=3483/0, in_queue=3483, util=95.10% 00:09:53.581 nvme0n2: ios=10965/0, merge=0/0, ticks=3699/0, in_queue=3699, util=94.72% 00:09:53.581 nvme0n3: ios=6279/0, merge=0/0, ticks=3055/0, in_queue=3055, util=96.24% 00:09:53.581 nvme0n4: ios=4991/0, merge=0/0, ticks=2651/0, in_queue=2651, util=96.51% 00:09:53.840 07:14:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.840 07:14:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:54.408 07:14:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:54.408 07:14:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:54.665 07:14:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:54.665 07:14:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:55.232 07:14:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:55.232 07:14:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:55.797 07:14:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:55.797 07:14:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 68539 00:09:55.797 07:14:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:55.797 07:14:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:55.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.797 07:14:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:55.797 07:14:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:55.797 07:14:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:55.797 07:14:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:55.797 07:14:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:55.797 07:14:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:55.797 nvmf hotplug test: fio failed as expected 00:09:55.797 07:14:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:55.797 07:14:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:55.797 07:14:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:55.797 07:14:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:56.054 07:14:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:56.054 07:14:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:56.054 07:14:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:56.054 07:14:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:56.054 07:14:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:56.054 07:14:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:56.054 07:14:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:09:56.054 07:14:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:56.054 07:14:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:09:56.054 07:14:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:56.054 07:14:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:56.054 rmmod nvme_tcp 00:09:56.054 rmmod nvme_fabrics 00:09:56.054 rmmod nvme_keyring 00:09:56.311 07:14:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:56.311 07:14:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:09:56.311 07:14:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:09:56.311 07:14:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 68144 ']' 00:09:56.311 07:14:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 68144 00:09:56.311 07:14:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 68144 ']' 00:09:56.311 07:14:05 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@952 -- # kill -0 68144 00:09:56.311 07:14:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:09:56.311 07:14:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:56.311 07:14:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68144 00:09:56.311 killing process with pid 68144 00:09:56.311 07:14:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:56.311 07:14:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:56.311 07:14:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68144' 00:09:56.311 07:14:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 68144 00:09:56.311 07:14:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 68144 00:09:56.311 07:14:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:56.311 07:14:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:56.311 07:14:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:56.311 07:14:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:56.311 07:14:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:56.311 07:14:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.311 07:14:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:56.311 07:14:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.569 07:14:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:56.569 ************************************ 00:09:56.569 END TEST nvmf_fio_target 00:09:56.569 ************************************ 00:09:56.569 00:09:56.569 real 0m22.242s 00:09:56.569 user 1m22.096s 00:09:56.569 sys 0m12.227s 00:09:56.569 07:14:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.569 07:14:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.569 07:14:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:56.569 07:14:05 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:56.569 07:14:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:56.569 07:14:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.569 07:14:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:56.569 ************************************ 00:09:56.569 START TEST nvmf_bdevio 00:09:56.569 ************************************ 00:09:56.569 07:14:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:56.569 * Looking for test storage... 
00:09:56.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:56.569 07:14:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:56.569 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:56.569 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.570 07:14:05 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:56.570 Cannot find device "nvmf_tgt_br" 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:56.570 Cannot find device "nvmf_tgt_br2" 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:56.570 Cannot find device "nvmf_tgt_br" 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:56.570 Cannot find device "nvmf_tgt_br2" 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:09:56.570 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:56.828 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:56.828 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:56.828 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:57.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:09:57.086 00:09:57.086 --- 10.0.0.2 ping statistics --- 00:09:57.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.086 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:57.086 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:57.086 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:09:57.086 00:09:57.086 --- 10.0.0.3 ping statistics --- 00:09:57.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.086 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:57.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:57.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:09:57.086 00:09:57.086 --- 10.0.0.1 ping statistics --- 00:09:57.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.086 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=68863 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 68863 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 68863 ']' 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:57.086 07:14:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.086 [2024-07-15 07:14:05.930649] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:09:57.086 [2024-07-15 07:14:05.931131] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.389 [2024-07-15 07:14:06.092428] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:57.389 [2024-07-15 07:14:06.175110] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.389 [2024-07-15 07:14:06.175645] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
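The nvmf_veth_init trace above builds the entire virtual test network before the target starts. A condensed sketch of that topology, using only commands and names visible in the trace (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is set up the same way and omitted here for brevity):

    # One namespace for the SPDK target, two veth pairs bridged together in the root namespace:
    #   nvmf_init_if <-> nvmf_init_br   (initiator end, stays in the root namespace)
    #   nvmf_tgt_if  <-> nvmf_tgt_br    (nvmf_tgt_if moves into the target namespace)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Address the endpoints: 10.0.0.1 for the initiator side, 10.0.0.2 for the target side.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    # Bring everything up and enslave the root-namespace ends to a common bridge.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Let NVMe/TCP traffic (port 4420) in, allow bridge forwarding, then verify reachability.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The target is then started inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78, pid 68863 above), so its 10.0.0.2:4420 listener is only reachable from the root namespace across the bridge, which is exactly what the ping checks confirm.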
00:09:57.389 [2024-07-15 07:14:06.176403] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.389 [2024-07-15 07:14:06.177068] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.389 [2024-07-15 07:14:06.177788] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:57.389 [2024-07-15 07:14:06.177938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:57.389 [2024-07-15 07:14:06.178018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:57.389 [2024-07-15 07:14:06.178218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:57.389 [2024-07-15 07:14:06.178200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:57.389 [2024-07-15 07:14:06.214693] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:58.335 [2024-07-15 07:14:07.144689] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:58.335 Malloc0 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:58.335 [2024-07-15 07:14:07.210917] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:58.335 { 00:09:58.335 "params": { 00:09:58.335 "name": "Nvme$subsystem", 00:09:58.335 "trtype": "$TEST_TRANSPORT", 00:09:58.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:58.335 "adrfam": "ipv4", 00:09:58.335 "trsvcid": "$NVMF_PORT", 00:09:58.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:58.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:58.335 "hdgst": ${hdgst:-false}, 00:09:58.335 "ddgst": ${ddgst:-false} 00:09:58.335 }, 00:09:58.335 "method": "bdev_nvme_attach_controller" 00:09:58.335 } 00:09:58.335 EOF 00:09:58.335 )") 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:09:58.335 07:14:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:58.335 "params": { 00:09:58.335 "name": "Nvme1", 00:09:58.335 "trtype": "tcp", 00:09:58.335 "traddr": "10.0.0.2", 00:09:58.335 "adrfam": "ipv4", 00:09:58.335 "trsvcid": "4420", 00:09:58.335 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:58.335 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:58.335 "hdgst": false, 00:09:58.335 "ddgst": false 00:09:58.335 }, 00:09:58.335 "method": "bdev_nvme_attach_controller" 00:09:58.335 }' 00:09:58.335 [2024-07-15 07:14:07.264338] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
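By this point the target side has been prepared with the rpc_cmd calls traced above (TCP transport, a 64 MiB / 512 B Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and a listener on 10.0.0.2:4420), and bdevio is being launched as the initiator with the generated JSON. A hedged sketch of the same sequence by hand, assuming rpc_cmd is, in effect, scripts/rpc.py talking to the target's default /var/tmp/spdk.sock, and using a hypothetical /tmp/nvme1.json in place of the /dev/fd/62 pipe:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # what rpc_cmd forwards to in this harness

    # Target side: transport, backing bdev, subsystem, namespace, NVMe/TCP listener.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: bdevio consumes the JSON emitted by gen_nvmf_target_json (its core is the
    # bdev_nvme_attach_controller entry for Nvme1 -> 10.0.0.2:4420 printed above).
    # /tmp/nvme1.json is a stand-in for the /dev/fd/62 file descriptor used in the trace.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/nvme1.json

The suite that follows then exercises Nvme1n1 (the 131072-block, 64 MiB namespace exposed over the fabric) with the write/read, reset, comparev/writev and passthru cases listed below.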
00:09:58.335 [2024-07-15 07:14:07.264426] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68906 ] 00:09:58.593 [2024-07-15 07:14:07.399567] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:58.593 [2024-07-15 07:14:07.489565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.593 [2024-07-15 07:14:07.489663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.593 [2024-07-15 07:14:07.489640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:58.593 [2024-07-15 07:14:07.533805] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:58.852 I/O targets: 00:09:58.852 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:58.852 00:09:58.852 00:09:58.852 CUnit - A unit testing framework for C - Version 2.1-3 00:09:58.852 http://cunit.sourceforge.net/ 00:09:58.852 00:09:58.852 00:09:58.852 Suite: bdevio tests on: Nvme1n1 00:09:58.852 Test: blockdev write read block ...passed 00:09:58.852 Test: blockdev write zeroes read block ...passed 00:09:58.852 Test: blockdev write zeroes read no split ...passed 00:09:58.852 Test: blockdev write zeroes read split ...passed 00:09:58.852 Test: blockdev write zeroes read split partial ...passed 00:09:58.852 Test: blockdev reset ...[2024-07-15 07:14:07.672804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:58.852 [2024-07-15 07:14:07.672990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139c7c0 (9): Bad file descriptor 00:09:58.852 [2024-07-15 07:14:07.694015] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:58.852 passed 00:09:58.852 Test: blockdev write read 8 blocks ...passed 00:09:58.852 Test: blockdev write read size > 128k ...passed 00:09:58.852 Test: blockdev write read invalid size ...passed 00:09:58.852 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:58.852 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:58.852 Test: blockdev write read max offset ...passed 00:09:58.852 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:58.852 Test: blockdev writev readv 8 blocks ...passed 00:09:58.852 Test: blockdev writev readv 30 x 1block ...passed 00:09:58.852 Test: blockdev writev readv block ...passed 00:09:58.852 Test: blockdev writev readv size > 128k ...passed 00:09:58.852 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:58.852 Test: blockdev comparev and writev ...[2024-07-15 07:14:07.704110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.852 [2024-07-15 07:14:07.704186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:58.852 [2024-07-15 07:14:07.704221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.852 [2024-07-15 07:14:07.704238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:58.852 [2024-07-15 07:14:07.704805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.852 [2024-07-15 07:14:07.704844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:58.852 [2024-07-15 07:14:07.704873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.852 [2024-07-15 07:14:07.704890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:58.852 [2024-07-15 07:14:07.705343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.852 [2024-07-15 07:14:07.705635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:58.852 [2024-07-15 07:14:07.705820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.852 [2024-07-15 07:14:07.705965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:58.852 [2024-07-15 07:14:07.706537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.852 passed 00:09:58.852 Test: blockdev nvme passthru rw ...passed 00:09:58.852 Test: blockdev nvme passthru vendor specific ...passed 00:09:58.852 Test: blockdev nvme admin passthru ...[2024-07-15 07:14:07.706836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:58.852 [2024-07-15 07:14:07.706882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.852 [2024-07-15 07:14:07.706905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:58.852 [2024-07-15 07:14:07.707915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:58.852 [2024-07-15 07:14:07.707949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:58.852 [2024-07-15 07:14:07.708111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:58.852 [2024-07-15 07:14:07.708140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:58.852 [2024-07-15 07:14:07.708284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:58.852 [2024-07-15 07:14:07.708310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:58.852 [2024-07-15 07:14:07.708447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:58.852 [2024-07-15 07:14:07.708473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:58.852 passed 00:09:58.852 Test: blockdev copy ...passed 00:09:58.852 00:09:58.852 Run Summary: Type Total Ran Passed Failed Inactive 00:09:58.852 suites 1 1 n/a 0 0 00:09:58.852 tests 23 23 23 0 0 00:09:58.852 asserts 152 152 152 0 n/a 00:09:58.852 00:09:58.852 Elapsed time = 0.174 seconds 00:09:59.109 07:14:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:59.109 07:14:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.109 07:14:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.109 07:14:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.109 07:14:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:59.109 07:14:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:59.109 07:14:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:59.109 07:14:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:09:59.109 07:14:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:59.109 07:14:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:09:59.109 07:14:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:59.109 07:14:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:59.109 rmmod nvme_tcp 00:09:59.109 rmmod nvme_fabrics 00:09:59.109 rmmod nvme_keyring 00:09:59.109 07:14:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:59.109 07:14:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:09:59.109 07:14:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:09:59.109 07:14:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 68863 ']' 00:09:59.109 07:14:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 68863 00:09:59.109 07:14:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 68863 
']' 00:09:59.109 07:14:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 68863 00:09:59.109 07:14:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:09:59.109 07:14:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:59.109 07:14:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68863 00:09:59.110 07:14:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:09:59.110 killing process with pid 68863 00:09:59.110 07:14:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:09:59.110 07:14:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68863' 00:09:59.110 07:14:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 68863 00:09:59.110 07:14:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 68863 00:09:59.367 07:14:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:59.367 07:14:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:59.367 07:14:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:59.367 07:14:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:59.367 07:14:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:59.367 07:14:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.367 07:14:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:59.367 07:14:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.367 07:14:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:59.367 00:09:59.367 real 0m2.941s 00:09:59.367 user 0m9.535s 00:09:59.367 sys 0m0.674s 00:09:59.367 07:14:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:59.367 ************************************ 00:09:59.367 END TEST nvmf_bdevio 00:09:59.367 ************************************ 00:09:59.367 07:14:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.625 07:14:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:59.625 07:14:08 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:59.625 07:14:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:59.625 07:14:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.625 07:14:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:59.625 ************************************ 00:09:59.625 START TEST nvmf_auth_target 00:09:59.625 ************************************ 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:59.625 * Looking for test storage... 
00:09:59.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.625 07:14:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:59.626 Cannot find device "nvmf_tgt_br" 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:59.626 Cannot find device "nvmf_tgt_br2" 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:59.626 Cannot find device "nvmf_tgt_br" 00:09:59.626 
07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:59.626 Cannot find device "nvmf_tgt_br2" 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:59.626 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:59.626 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:59.626 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:59.884 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:59.884 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:59.884 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:59.884 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:59.884 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:59.884 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:59.885 07:14:08 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:59.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:59.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:09:59.885 00:09:59.885 --- 10.0.0.2 ping statistics --- 00:09:59.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.885 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:59.885 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:59.885 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:09:59.885 00:09:59.885 --- 10.0.0.3 ping statistics --- 00:09:59.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.885 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:59.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:59.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:09:59.885 00:09:59.885 --- 10.0.0.1 ping statistics --- 00:09:59.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.885 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=69074 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 69074 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69074 ']' 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.885 07:14:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:59.885 07:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=69093 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b0f3be5c395fc968d23072805e326e949b05c1152618d4b5 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.OtN 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b0f3be5c395fc968d23072805e326e949b05c1152618d4b5 0 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b0f3be5c395fc968d23072805e326e949b05c1152618d4b5 0 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b0f3be5c395fc968d23072805e326e949b05c1152618d4b5 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.OtN 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.OtN 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.OtN 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c5b70035c399a8ee14f64e958d813ec5bdb35a9461ec7fd16b9c36a9b9fc9aa1 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.YJw 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c5b70035c399a8ee14f64e958d813ec5bdb35a9461ec7fd16b9c36a9b9fc9aa1 3 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c5b70035c399a8ee14f64e958d813ec5bdb35a9461ec7fd16b9c36a9b9fc9aa1 3 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c5b70035c399a8ee14f64e958d813ec5bdb35a9461ec7fd16b9c36a9b9fc9aa1 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.YJw 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.YJw 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.YJw 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1e09fa226ead1c90823fe670f613bf56 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.VHW 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1e09fa226ead1c90823fe670f613bf56 1 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1e09fa226ead1c90823fe670f613bf56 1 
00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1e09fa226ead1c90823fe670f613bf56 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.VHW 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.VHW 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.VHW 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:00.451 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=16806c848257c7437d5e7e28d1d342149b33d5aaac88786e 00:10:00.452 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:00.452 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.fxl 00:10:00.452 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 16806c848257c7437d5e7e28d1d342149b33d5aaac88786e 2 00:10:00.452 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 16806c848257c7437d5e7e28d1d342149b33d5aaac88786e 2 00:10:00.452 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:00.452 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:00.452 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=16806c848257c7437d5e7e28d1d342149b33d5aaac88786e 00:10:00.452 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:00.452 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.fxl 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.fxl 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.fxl 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:00.710 
07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1a7021ac1c831e5c6af305493d9d061d85ed58a8db8896a6 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Exp 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1a7021ac1c831e5c6af305493d9d061d85ed58a8db8896a6 2 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1a7021ac1c831e5c6af305493d9d061d85ed58a8db8896a6 2 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1a7021ac1c831e5c6af305493d9d061d85ed58a8db8896a6 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Exp 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Exp 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Exp 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6a3c48ea230a6f79c4f6292f28c103a0 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.pZS 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6a3c48ea230a6f79c4f6292f28c103a0 1 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6a3c48ea230a6f79c4f6292f28c103a0 1 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6a3c48ea230a6f79c4f6292f28c103a0 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.pZS 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.pZS 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.pZS 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2679aca0f5e55aab72e526b7f3b8875ddfb30570ac5d512e60c342d85dc8b06f 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.MIs 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2679aca0f5e55aab72e526b7f3b8875ddfb30570ac5d512e60c342d85dc8b06f 3 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2679aca0f5e55aab72e526b7f3b8875ddfb30570ac5d512e60c342d85dc8b06f 3 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2679aca0f5e55aab72e526b7f3b8875ddfb30570ac5d512e60c342d85dc8b06f 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.MIs 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.MIs 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.MIs 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 69074 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69074 ']' 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
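[note] The trace above is gen_dhchap_key from nvmf/common.sh filling out keys[1..3] and ckeys[1..3]: for each entry it maps the digest name to its DHHC-1 code (null=0, sha256=1, sha384=2, sha512=3), pulls len/2 random bytes with xxd from /dev/urandom (emitting len hex characters), picks a /tmp/spdk.key-<digest>.XXX file with mktemp, formats the secret through an inline "python -" step whose body is not echoed in the trace, and locks the file down to 0600. The DHHC-1 secrets that appear later in the nvme connect commands base64-decode to the ASCII hex key plus a four-byte trailer, so the sketch below assumes a base64(key || CRC32) payload; treat the encoding and the CRC byte order as an assumption, not the authoritative format_key implementation.

# sketch of one gen_dhchap_key-style secret, assumptions marked inline
digest=1                                          # 1 == sha256 in the digests map traced above
len=32                                            # hex characters requested (32 -> 16 random bytes)
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # same xxd invocation as in the trace
file=$(mktemp -t spdk.key-sketch.XXX)
# ASSUMPTION: the hidden "python -" step emits base64(hex string || CRC32);
# the secrets later in this log do decode to the hex string plus 4 extra bytes.
python3 - "$key" "$digest" > "$file" <<'PY'
import base64, struct, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
payload = base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode()
print(f"DHHC-1:{digest:02x}:{payload}:", end="")
PY
chmod 0600 "$file"                                # mirrors the chmod 0600 in the trace
echo "$file"                                      # the test stores this path in keys[]/ckeys[]

These 0600 key files are what keyring_file_add_key registers on both the target (/var/tmp/spdk.sock) and host (/var/tmp/host.sock) RPC servers in the steps that follow.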
00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:00.710 07:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.277 07:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:01.277 07:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:01.277 07:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 69093 /var/tmp/host.sock 00:10:01.277 07:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69093 ']' 00:10:01.277 07:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:10:01.277 07:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:01.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:01.277 07:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:01.277 07:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:01.277 07:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.536 07:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:01.536 07:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:01.536 07:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:10:01.536 07:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.536 07:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.536 07:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.536 07:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:01.536 07:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.OtN 00:10:01.536 07:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.536 07:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.536 07:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.536 07:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.OtN 00:10:01.536 07:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.OtN 00:10:01.794 07:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.YJw ]] 00:10:01.794 07:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YJw 00:10:01.794 07:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.794 07:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.794 07:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.794 07:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YJw 00:10:01.794 07:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.YJw 00:10:02.053 07:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:02.053 07:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.VHW 00:10:02.053 07:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.053 07:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.053 07:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.053 07:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.VHW 00:10:02.053 07:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.VHW 00:10:02.311 07:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.fxl ]] 00:10:02.311 07:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fxl 00:10:02.311 07:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.311 07:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.311 07:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.311 07:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fxl 00:10:02.311 07:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fxl 00:10:02.570 07:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:02.570 07:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Exp 00:10:02.570 07:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.570 07:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.570 07:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.570 07:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Exp 00:10:02.570 07:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Exp 00:10:02.828 07:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.pZS ]] 00:10:02.828 07:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.pZS 00:10:02.828 07:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.828 07:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.828 07:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.828 07:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.pZS 00:10:02.828 07:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.pZS 00:10:03.086 07:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:03.086 
07:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.MIs 00:10:03.086 07:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.086 07:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.086 07:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.086 07:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.MIs 00:10:03.086 07:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.MIs 00:10:03.345 07:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:10:03.345 07:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:03.345 07:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:03.345 07:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:03.345 07:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:03.345 07:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:03.603 07:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:10:03.603 07:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:03.603 07:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:03.603 07:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:03.603 07:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:03.603 07:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:03.603 07:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:03.603 07:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.603 07:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.603 07:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.603 07:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:03.603 07:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:03.862 00:10:03.862 07:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:03.862 07:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:03.862 07:14:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:04.121 07:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:04.122 07:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:04.122 07:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.122 07:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.122 07:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.122 07:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:04.122 { 00:10:04.122 "cntlid": 1, 00:10:04.122 "qid": 0, 00:10:04.122 "state": "enabled", 00:10:04.122 "thread": "nvmf_tgt_poll_group_000", 00:10:04.122 "listen_address": { 00:10:04.122 "trtype": "TCP", 00:10:04.122 "adrfam": "IPv4", 00:10:04.122 "traddr": "10.0.0.2", 00:10:04.122 "trsvcid": "4420" 00:10:04.122 }, 00:10:04.122 "peer_address": { 00:10:04.122 "trtype": "TCP", 00:10:04.122 "adrfam": "IPv4", 00:10:04.122 "traddr": "10.0.0.1", 00:10:04.122 "trsvcid": "45830" 00:10:04.122 }, 00:10:04.122 "auth": { 00:10:04.122 "state": "completed", 00:10:04.122 "digest": "sha256", 00:10:04.122 "dhgroup": "null" 00:10:04.122 } 00:10:04.122 } 00:10:04.122 ]' 00:10:04.122 07:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:04.389 07:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:04.389 07:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:04.389 07:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:04.389 07:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:04.389 07:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:04.389 07:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:04.389 07:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:04.647 07:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:00:YjBmM2JlNWMzOTVmYzk2OGQyMzA3MjgwNWUzMjZlOTQ5YjA1YzExNTI2MThkNGI15W2YHA==: --dhchap-ctrl-secret DHHC-1:03:YzViNzAwMzVjMzk5YThlZTE0ZjY0ZTk1OGQ4MTNlYzViZGIzNWE5NDYxZWM3ZmQxNmI5YzM2YTliOWZjOWFhMePeSLs=: 00:10:09.954 07:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:09.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:09.954 07:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:10:09.954 07:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.954 07:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.954 07:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.954 07:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:10:09.954 07:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:09.954 07:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:09.954 07:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:10:09.954 07:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:09.954 07:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:09.954 07:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:09.954 07:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:09.954 07:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:09.954 07:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:09.954 07:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.954 07:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.954 07:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.954 07:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:09.954 07:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:10.212 00:10:10.212 07:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:10.212 07:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:10.212 07:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:10.470 07:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:10.470 07:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:10.470 07:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.470 07:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.470 07:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.470 07:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:10.470 { 00:10:10.470 "cntlid": 3, 00:10:10.470 "qid": 0, 00:10:10.470 "state": "enabled", 00:10:10.470 "thread": "nvmf_tgt_poll_group_000", 00:10:10.470 "listen_address": { 00:10:10.470 "trtype": "TCP", 00:10:10.470 "adrfam": "IPv4", 00:10:10.470 "traddr": "10.0.0.2", 00:10:10.470 "trsvcid": "4420" 00:10:10.470 }, 00:10:10.470 "peer_address": { 00:10:10.470 "trtype": "TCP", 00:10:10.470 
"adrfam": "IPv4", 00:10:10.470 "traddr": "10.0.0.1", 00:10:10.470 "trsvcid": "60688" 00:10:10.470 }, 00:10:10.470 "auth": { 00:10:10.470 "state": "completed", 00:10:10.470 "digest": "sha256", 00:10:10.470 "dhgroup": "null" 00:10:10.470 } 00:10:10.470 } 00:10:10.470 ]' 00:10:10.470 07:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:10.470 07:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:10.727 07:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:10.727 07:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:10.727 07:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:10.727 07:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:10.727 07:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:10.727 07:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:10.986 07:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:01:MWUwOWZhMjI2ZWFkMWM5MDgyM2ZlNjcwZjYxM2JmNTYnhNf7: --dhchap-ctrl-secret DHHC-1:02:MTY4MDZjODQ4MjU3Yzc0MzdkNWU3ZTI4ZDFkMzQyMTQ5YjMzZDVhYWFjODg3ODZlm/yyyA==: 00:10:11.918 07:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:11.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:11.918 07:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:10:11.918 07:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.918 07:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.918 07:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.918 07:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:11.918 07:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:11.918 07:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:12.177 07:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:10:12.177 07:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:12.177 07:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:12.177 07:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:12.177 07:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:12.177 07:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:12.177 07:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:12.177 07:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.177 07:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.177 07:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.177 07:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:12.177 07:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:12.435 00:10:12.435 07:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:12.435 07:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:12.435 07:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:12.694 07:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:12.694 07:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:12.694 07:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.694 07:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.694 07:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.694 07:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:12.694 { 00:10:12.694 "cntlid": 5, 00:10:12.694 "qid": 0, 00:10:12.694 "state": "enabled", 00:10:12.694 "thread": "nvmf_tgt_poll_group_000", 00:10:12.694 "listen_address": { 00:10:12.694 "trtype": "TCP", 00:10:12.694 "adrfam": "IPv4", 00:10:12.694 "traddr": "10.0.0.2", 00:10:12.694 "trsvcid": "4420" 00:10:12.694 }, 00:10:12.694 "peer_address": { 00:10:12.694 "trtype": "TCP", 00:10:12.694 "adrfam": "IPv4", 00:10:12.694 "traddr": "10.0.0.1", 00:10:12.694 "trsvcid": "60708" 00:10:12.694 }, 00:10:12.694 "auth": { 00:10:12.694 "state": "completed", 00:10:12.694 "digest": "sha256", 00:10:12.694 "dhgroup": "null" 00:10:12.694 } 00:10:12.694 } 00:10:12.694 ]' 00:10:12.694 07:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:12.694 07:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:12.694 07:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:12.951 07:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:12.951 07:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:12.951 07:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:12.951 07:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:12.951 07:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:13.209 07:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:02:MWE3MDIxYWMxYzgzMWU1YzZhZjMwNTQ5M2Q5ZDA2MWQ4NWVkNThhOGRiODg5NmE22Evdew==: --dhchap-ctrl-secret DHHC-1:01:NmEzYzQ4ZWEyMzBhNmY3OWM0ZjYyOTJmMjhjMTAzYTC0B3mf: 00:10:14.143 07:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:14.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:14.144 07:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:10:14.144 07:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.144 07:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.144 07:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.144 07:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:14.144 07:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:14.144 07:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:14.144 07:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:10:14.144 07:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:14.144 07:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:14.144 07:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:14.144 07:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:14.144 07:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:14.144 07:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key3 00:10:14.144 07:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.144 07:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.144 07:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.144 07:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:14.144 07:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:14.716 00:10:14.717 07:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:14.717 07:14:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:14.717 07:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:14.975 07:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:14.975 07:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:14.975 07:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.975 07:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.975 07:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.975 07:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:14.975 { 00:10:14.975 "cntlid": 7, 00:10:14.975 "qid": 0, 00:10:14.975 "state": "enabled", 00:10:14.975 "thread": "nvmf_tgt_poll_group_000", 00:10:14.975 "listen_address": { 00:10:14.975 "trtype": "TCP", 00:10:14.975 "adrfam": "IPv4", 00:10:14.975 "traddr": "10.0.0.2", 00:10:14.975 "trsvcid": "4420" 00:10:14.975 }, 00:10:14.975 "peer_address": { 00:10:14.975 "trtype": "TCP", 00:10:14.975 "adrfam": "IPv4", 00:10:14.975 "traddr": "10.0.0.1", 00:10:14.975 "trsvcid": "60742" 00:10:14.975 }, 00:10:14.975 "auth": { 00:10:14.975 "state": "completed", 00:10:14.975 "digest": "sha256", 00:10:14.975 "dhgroup": "null" 00:10:14.975 } 00:10:14.975 } 00:10:14.975 ]' 00:10:14.975 07:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:14.975 07:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:14.975 07:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:14.975 07:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:14.975 07:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:14.975 07:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:14.975 07:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:14.975 07:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:15.233 07:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:03:MjY3OWFjYTBmNWU1NWFhYjcyZTUyNmI3ZjNiODg3NWRkZmIzMDU3MGFjNWQ1MTJlNjBjMzQyZDg1ZGM4YjA2ZjRizus=: 00:10:16.167 07:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:16.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:16.167 07:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:10:16.167 07:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.167 07:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.167 07:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.167 07:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # 
for dhgroup in "${dhgroups[@]}" 00:10:16.167 07:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:16.167 07:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:16.167 07:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:16.425 07:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:10:16.425 07:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:16.425 07:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:16.425 07:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:16.425 07:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:16.425 07:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:16.425 07:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:16.425 07:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.425 07:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.425 07:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.425 07:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:16.425 07:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:16.684 00:10:16.942 07:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:16.942 07:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:16.942 07:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:17.200 07:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:17.200 07:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:17.200 07:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.200 07:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.200 07:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.200 07:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:17.200 { 00:10:17.200 "cntlid": 9, 00:10:17.200 "qid": 0, 00:10:17.200 "state": "enabled", 00:10:17.200 "thread": "nvmf_tgt_poll_group_000", 00:10:17.200 "listen_address": { 00:10:17.200 "trtype": "TCP", 00:10:17.200 "adrfam": "IPv4", 00:10:17.200 
"traddr": "10.0.0.2", 00:10:17.200 "trsvcid": "4420" 00:10:17.200 }, 00:10:17.200 "peer_address": { 00:10:17.200 "trtype": "TCP", 00:10:17.200 "adrfam": "IPv4", 00:10:17.200 "traddr": "10.0.0.1", 00:10:17.200 "trsvcid": "60780" 00:10:17.200 }, 00:10:17.200 "auth": { 00:10:17.200 "state": "completed", 00:10:17.200 "digest": "sha256", 00:10:17.200 "dhgroup": "ffdhe2048" 00:10:17.200 } 00:10:17.200 } 00:10:17.200 ]' 00:10:17.200 07:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:17.200 07:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:17.200 07:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:17.200 07:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:17.200 07:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:17.200 07:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:17.200 07:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:17.200 07:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:17.458 07:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:00:YjBmM2JlNWMzOTVmYzk2OGQyMzA3MjgwNWUzMjZlOTQ5YjA1YzExNTI2MThkNGI15W2YHA==: --dhchap-ctrl-secret DHHC-1:03:YzViNzAwMzVjMzk5YThlZTE0ZjY0ZTk1OGQ4MTNlYzViZGIzNWE5NDYxZWM3ZmQxNmI5YzM2YTliOWZjOWFhMePeSLs=: 00:10:18.392 07:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:18.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:18.392 07:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:10:18.392 07:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.392 07:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.392 07:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.392 07:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:18.392 07:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:18.392 07:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:18.650 07:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:10:18.650 07:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:18.650 07:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:18.650 07:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:18.650 07:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:18.650 07:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:18.650 07:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.650 07:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.650 07:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.650 07:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.650 07:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.650 07:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.908 00:10:18.908 07:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:18.908 07:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:18.908 07:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:19.166 07:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:19.166 07:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:19.166 07:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.166 07:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.166 07:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.166 07:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:19.166 { 00:10:19.166 "cntlid": 11, 00:10:19.166 "qid": 0, 00:10:19.166 "state": "enabled", 00:10:19.166 "thread": "nvmf_tgt_poll_group_000", 00:10:19.166 "listen_address": { 00:10:19.166 "trtype": "TCP", 00:10:19.166 "adrfam": "IPv4", 00:10:19.166 "traddr": "10.0.0.2", 00:10:19.166 "trsvcid": "4420" 00:10:19.166 }, 00:10:19.166 "peer_address": { 00:10:19.166 "trtype": "TCP", 00:10:19.166 "adrfam": "IPv4", 00:10:19.166 "traddr": "10.0.0.1", 00:10:19.166 "trsvcid": "60812" 00:10:19.166 }, 00:10:19.166 "auth": { 00:10:19.166 "state": "completed", 00:10:19.166 "digest": "sha256", 00:10:19.166 "dhgroup": "ffdhe2048" 00:10:19.166 } 00:10:19.166 } 00:10:19.166 ]' 00:10:19.166 07:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:19.424 07:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:19.424 07:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:19.424 07:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:19.424 07:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:19.424 07:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:19.424 07:14:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:19.424 07:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:19.683 07:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:01:MWUwOWZhMjI2ZWFkMWM5MDgyM2ZlNjcwZjYxM2JmNTYnhNf7: --dhchap-ctrl-secret DHHC-1:02:MTY4MDZjODQ4MjU3Yzc0MzdkNWU3ZTI4ZDFkMzQyMTQ5YjMzZDVhYWFjODg3ODZlm/yyyA==: 00:10:20.617 07:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:20.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:20.617 07:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:10:20.617 07:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.617 07:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.617 07:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.617 07:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:20.617 07:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:20.617 07:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:20.617 07:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:10:20.617 07:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:20.617 07:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:20.617 07:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:20.617 07:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:20.617 07:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:20.617 07:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:20.617 07:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.617 07:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.617 07:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.617 07:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:20.617 07:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.186 00:10:21.186 07:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:21.186 07:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:21.186 07:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:21.443 07:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:21.443 07:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:21.443 07:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.443 07:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.443 07:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.443 07:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:21.443 { 00:10:21.443 "cntlid": 13, 00:10:21.443 "qid": 0, 00:10:21.443 "state": "enabled", 00:10:21.443 "thread": "nvmf_tgt_poll_group_000", 00:10:21.443 "listen_address": { 00:10:21.443 "trtype": "TCP", 00:10:21.443 "adrfam": "IPv4", 00:10:21.443 "traddr": "10.0.0.2", 00:10:21.443 "trsvcid": "4420" 00:10:21.443 }, 00:10:21.443 "peer_address": { 00:10:21.443 "trtype": "TCP", 00:10:21.443 "adrfam": "IPv4", 00:10:21.443 "traddr": "10.0.0.1", 00:10:21.443 "trsvcid": "47670" 00:10:21.443 }, 00:10:21.443 "auth": { 00:10:21.443 "state": "completed", 00:10:21.443 "digest": "sha256", 00:10:21.443 "dhgroup": "ffdhe2048" 00:10:21.443 } 00:10:21.443 } 00:10:21.443 ]' 00:10:21.443 07:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:21.700 07:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:21.700 07:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:21.700 07:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:21.700 07:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:21.700 07:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:21.700 07:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:21.700 07:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:21.957 07:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:02:MWE3MDIxYWMxYzgzMWU1YzZhZjMwNTQ5M2Q5ZDA2MWQ4NWVkNThhOGRiODg5NmE22Evdew==: --dhchap-ctrl-secret DHHC-1:01:NmEzYzQ4ZWEyMzBhNmY3OWM0ZjYyOTJmMjhjMTAzYTC0B3mf: 00:10:22.888 07:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:22.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:22.888 07:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 
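[note] From here the loop in target/auth.sh has moved on from the null dhgroup to ffdhe2048 and is cycling through the key IDs again. Each pass narrows the host to one digest/dhgroup pair with bdev_nvme_set_options, allows the host NQN on the subsystem with the matching --dhchap-key/--dhchap-ctrlr-key, attaches a controller over the host RPC socket, checks the qpair's auth block (state/digest/dhgroup) with jq, detaches, repeats the handshake from the kernel initiator with nvme connect, and finally removes the host mapping. Below is a condensed sketch of one such pass, using only RPCs and flags that appear verbatim in the trace; the NQNs, hostid and key files are the ones from this run, while digest, dhgroup and key id change per iteration.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostid=d3ffbb73-b196-4070-b8e4-0883df0bb9c9
hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid

# host side: restrict the initiator to one digest/dhgroup combination
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# target side (default /var/tmp/spdk.sock): allow this host with key1/ckey1
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# host side: attach, let DH-HMAC-CHAP complete, verify the qpair, detach
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect "completed"
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# same handshake from the kernel initiator, using the key1/ckey1 files
# generated earlier in this run, then tear the host mapping down
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$(cat /tmp/spdk.key-sha256.VHW)" \
    --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha384.fxl)"
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"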
00:10:22.888 07:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.888 07:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.888 07:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.888 07:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:22.888 07:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:22.888 07:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:23.145 07:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:10:23.145 07:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:23.145 07:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:23.145 07:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:23.145 07:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:23.145 07:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:23.145 07:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key3 00:10:23.145 07:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.145 07:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.145 07:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.145 07:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:23.145 07:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:23.402 00:10:23.402 07:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:23.402 07:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:23.402 07:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:23.659 07:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:23.659 07:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:23.659 07:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.659 07:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.659 07:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.659 07:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:23.659 { 00:10:23.659 "cntlid": 15, 00:10:23.659 "qid": 0, 
00:10:23.659 "state": "enabled", 00:10:23.659 "thread": "nvmf_tgt_poll_group_000", 00:10:23.659 "listen_address": { 00:10:23.659 "trtype": "TCP", 00:10:23.659 "adrfam": "IPv4", 00:10:23.659 "traddr": "10.0.0.2", 00:10:23.659 "trsvcid": "4420" 00:10:23.659 }, 00:10:23.659 "peer_address": { 00:10:23.659 "trtype": "TCP", 00:10:23.659 "adrfam": "IPv4", 00:10:23.659 "traddr": "10.0.0.1", 00:10:23.659 "trsvcid": "47694" 00:10:23.659 }, 00:10:23.659 "auth": { 00:10:23.659 "state": "completed", 00:10:23.659 "digest": "sha256", 00:10:23.659 "dhgroup": "ffdhe2048" 00:10:23.659 } 00:10:23.659 } 00:10:23.659 ]' 00:10:23.659 07:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:23.916 07:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:23.916 07:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:23.916 07:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:23.916 07:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:23.916 07:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:23.916 07:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:23.916 07:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:24.174 07:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:03:MjY3OWFjYTBmNWU1NWFhYjcyZTUyNmI3ZjNiODg3NWRkZmIzMDU3MGFjNWQ1MTJlNjBjMzQyZDg1ZGM4YjA2ZjRizus=: 00:10:24.738 07:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:24.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:24.738 07:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:10:24.738 07:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.738 07:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.738 07:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.738 07:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:24.738 07:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:24.738 07:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:24.738 07:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:24.995 07:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:10:24.995 07:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:24.995 07:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:24.995 07:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:10:24.995 07:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:24.995 07:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:24.995 07:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:24.995 07:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.995 07:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.995 07:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.995 07:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:24.995 07:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:25.561 00:10:25.561 07:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:25.561 07:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:25.561 07:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:25.819 07:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:25.819 07:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:25.819 07:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.819 07:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.819 07:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.819 07:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:25.819 { 00:10:25.819 "cntlid": 17, 00:10:25.819 "qid": 0, 00:10:25.819 "state": "enabled", 00:10:25.819 "thread": "nvmf_tgt_poll_group_000", 00:10:25.819 "listen_address": { 00:10:25.819 "trtype": "TCP", 00:10:25.819 "adrfam": "IPv4", 00:10:25.819 "traddr": "10.0.0.2", 00:10:25.819 "trsvcid": "4420" 00:10:25.819 }, 00:10:25.819 "peer_address": { 00:10:25.819 "trtype": "TCP", 00:10:25.819 "adrfam": "IPv4", 00:10:25.819 "traddr": "10.0.0.1", 00:10:25.819 "trsvcid": "47724" 00:10:25.819 }, 00:10:25.819 "auth": { 00:10:25.819 "state": "completed", 00:10:25.819 "digest": "sha256", 00:10:25.819 "dhgroup": "ffdhe3072" 00:10:25.819 } 00:10:25.819 } 00:10:25.819 ]' 00:10:25.819 07:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:25.819 07:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:25.819 07:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:25.819 07:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:25.819 07:14:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:25.819 07:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:25.819 07:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:25.819 07:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:26.386 07:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:00:YjBmM2JlNWMzOTVmYzk2OGQyMzA3MjgwNWUzMjZlOTQ5YjA1YzExNTI2MThkNGI15W2YHA==: --dhchap-ctrl-secret DHHC-1:03:YzViNzAwMzVjMzk5YThlZTE0ZjY0ZTk1OGQ4MTNlYzViZGIzNWE5NDYxZWM3ZmQxNmI5YzM2YTliOWZjOWFhMePeSLs=: 00:10:26.955 07:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:26.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:26.955 07:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:10:26.955 07:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.955 07:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.955 07:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.955 07:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:26.955 07:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:26.955 07:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:27.211 07:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:10:27.211 07:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:27.211 07:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:27.211 07:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:27.211 07:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:27.211 07:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:27.211 07:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:27.211 07:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.211 07:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.211 07:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.211 07:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:27.211 
07:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:27.807 00:10:27.807 07:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:27.807 07:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:27.807 07:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:28.065 07:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:28.065 07:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:28.065 07:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.065 07:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.065 07:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.065 07:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:28.065 { 00:10:28.065 "cntlid": 19, 00:10:28.065 "qid": 0, 00:10:28.065 "state": "enabled", 00:10:28.065 "thread": "nvmf_tgt_poll_group_000", 00:10:28.065 "listen_address": { 00:10:28.065 "trtype": "TCP", 00:10:28.065 "adrfam": "IPv4", 00:10:28.065 "traddr": "10.0.0.2", 00:10:28.065 "trsvcid": "4420" 00:10:28.065 }, 00:10:28.065 "peer_address": { 00:10:28.065 "trtype": "TCP", 00:10:28.065 "adrfam": "IPv4", 00:10:28.065 "traddr": "10.0.0.1", 00:10:28.065 "trsvcid": "47742" 00:10:28.065 }, 00:10:28.065 "auth": { 00:10:28.065 "state": "completed", 00:10:28.065 "digest": "sha256", 00:10:28.065 "dhgroup": "ffdhe3072" 00:10:28.065 } 00:10:28.065 } 00:10:28.065 ]' 00:10:28.065 07:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:28.065 07:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:28.065 07:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:28.065 07:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:28.065 07:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:28.322 07:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:28.322 07:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:28.322 07:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:28.580 07:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:01:MWUwOWZhMjI2ZWFkMWM5MDgyM2ZlNjcwZjYxM2JmNTYnhNf7: --dhchap-ctrl-secret DHHC-1:02:MTY4MDZjODQ4MjU3Yzc0MzdkNWU3ZTI4ZDFkMzQyMTQ5YjMzZDVhYWFjODg3ODZlm/yyyA==: 00:10:29.146 07:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:29.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
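Between the RPC-driven attach and the host cleanup, each pass also connects through the kernel initiator with the same credentials; that is the nvme connect / nvme disconnect pair ending above. Trimmed for readability (the full DHHC-1 secret strings appear verbatim in the trace; the shortened placeholders below are mine):

    # Kernel host: bidirectional DH-HMAC-CHAP, host secret plus controller secret
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 \
        --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 \
        --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'

    # Tear down before the host entry is removed and the next combination starts
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The expected disconnect output is the single line "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)" seen throughout this log.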
00:10:29.146 07:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:10:29.146 07:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.146 07:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.146 07:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.146 07:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:29.403 07:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:29.404 07:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:29.660 07:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:10:29.660 07:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:29.661 07:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:29.661 07:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:29.661 07:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:29.661 07:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:29.661 07:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:29.661 07:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.661 07:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.661 07:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.661 07:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:29.661 07:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:29.918 00:10:29.918 07:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:29.918 07:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:29.918 07:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:30.484 07:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:30.484 07:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:30.484 07:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.484 07:14:39 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:30.484 07:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.484 07:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:30.484 { 00:10:30.484 "cntlid": 21, 00:10:30.484 "qid": 0, 00:10:30.484 "state": "enabled", 00:10:30.484 "thread": "nvmf_tgt_poll_group_000", 00:10:30.484 "listen_address": { 00:10:30.484 "trtype": "TCP", 00:10:30.484 "adrfam": "IPv4", 00:10:30.484 "traddr": "10.0.0.2", 00:10:30.484 "trsvcid": "4420" 00:10:30.484 }, 00:10:30.484 "peer_address": { 00:10:30.484 "trtype": "TCP", 00:10:30.484 "adrfam": "IPv4", 00:10:30.484 "traddr": "10.0.0.1", 00:10:30.484 "trsvcid": "46812" 00:10:30.484 }, 00:10:30.484 "auth": { 00:10:30.484 "state": "completed", 00:10:30.484 "digest": "sha256", 00:10:30.484 "dhgroup": "ffdhe3072" 00:10:30.484 } 00:10:30.484 } 00:10:30.484 ]' 00:10:30.484 07:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:30.484 07:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:30.484 07:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:30.484 07:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:30.484 07:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:30.484 07:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:30.484 07:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:30.484 07:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:31.050 07:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:02:MWE3MDIxYWMxYzgzMWU1YzZhZjMwNTQ5M2Q5ZDA2MWQ4NWVkNThhOGRiODg5NmE22Evdew==: --dhchap-ctrl-secret DHHC-1:01:NmEzYzQ4ZWEyMzBhNmY3OWM0ZjYyOTJmMjhjMTAzYTC0B3mf: 00:10:31.615 07:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:31.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:31.615 07:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:10:31.615 07:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.615 07:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.615 07:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.615 07:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:31.615 07:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:31.615 07:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:32.179 07:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:10:32.179 07:14:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:32.179 07:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:32.179 07:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:32.179 07:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:32.179 07:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:32.179 07:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key3 00:10:32.179 07:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.179 07:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.179 07:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.179 07:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:32.179 07:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:32.437 00:10:32.437 07:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:32.437 07:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:32.437 07:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.695 07:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:32.695 07:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:32.695 07:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.695 07:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.695 07:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.695 07:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:32.695 { 00:10:32.695 "cntlid": 23, 00:10:32.695 "qid": 0, 00:10:32.695 "state": "enabled", 00:10:32.695 "thread": "nvmf_tgt_poll_group_000", 00:10:32.695 "listen_address": { 00:10:32.695 "trtype": "TCP", 00:10:32.695 "adrfam": "IPv4", 00:10:32.695 "traddr": "10.0.0.2", 00:10:32.695 "trsvcid": "4420" 00:10:32.695 }, 00:10:32.695 "peer_address": { 00:10:32.695 "trtype": "TCP", 00:10:32.695 "adrfam": "IPv4", 00:10:32.695 "traddr": "10.0.0.1", 00:10:32.695 "trsvcid": "46854" 00:10:32.695 }, 00:10:32.695 "auth": { 00:10:32.695 "state": "completed", 00:10:32.695 "digest": "sha256", 00:10:32.695 "dhgroup": "ffdhe3072" 00:10:32.695 } 00:10:32.695 } 00:10:32.695 ]' 00:10:32.695 07:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:32.695 07:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:32.695 07:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:10:32.695 07:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:32.695 07:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:32.953 07:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:32.953 07:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:32.953 07:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:33.210 07:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:03:MjY3OWFjYTBmNWU1NWFhYjcyZTUyNmI3ZjNiODg3NWRkZmIzMDU3MGFjNWQ1MTJlNjBjMzQyZDg1ZGM4YjA2ZjRizus=: 00:10:33.776 07:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:33.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:33.776 07:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:10:33.776 07:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.776 07:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.776 07:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.777 07:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:33.777 07:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:33.777 07:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:33.777 07:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:34.343 07:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:10:34.343 07:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:34.343 07:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:34.343 07:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:34.343 07:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:34.343 07:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:34.343 07:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:34.343 07:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.343 07:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.343 07:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.343 07:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:34.343 07:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:34.601 00:10:34.601 07:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:34.601 07:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:34.601 07:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.860 07:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:34.860 07:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:34.860 07:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.860 07:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.860 07:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.860 07:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:34.860 { 00:10:34.860 "cntlid": 25, 00:10:34.860 "qid": 0, 00:10:34.860 "state": "enabled", 00:10:34.860 "thread": "nvmf_tgt_poll_group_000", 00:10:34.860 "listen_address": { 00:10:34.860 "trtype": "TCP", 00:10:34.860 "adrfam": "IPv4", 00:10:34.860 "traddr": "10.0.0.2", 00:10:34.860 "trsvcid": "4420" 00:10:34.860 }, 00:10:34.860 "peer_address": { 00:10:34.860 "trtype": "TCP", 00:10:34.860 "adrfam": "IPv4", 00:10:34.860 "traddr": "10.0.0.1", 00:10:34.860 "trsvcid": "46878" 00:10:34.860 }, 00:10:34.860 "auth": { 00:10:34.860 "state": "completed", 00:10:34.860 "digest": "sha256", 00:10:34.860 "dhgroup": "ffdhe4096" 00:10:34.860 } 00:10:34.860 } 00:10:34.860 ]' 00:10:34.860 07:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:34.860 07:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:34.860 07:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:35.118 07:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:35.118 07:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:35.118 07:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:35.118 07:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:35.118 07:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:35.376 07:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:00:YjBmM2JlNWMzOTVmYzk2OGQyMzA3MjgwNWUzMjZlOTQ5YjA1YzExNTI2MThkNGI15W2YHA==: --dhchap-ctrl-secret 
DHHC-1:03:YzViNzAwMzVjMzk5YThlZTE0ZjY0ZTk1OGQ4MTNlYzViZGIzNWE5NDYxZWM3ZmQxNmI5YzM2YTliOWZjOWFhMePeSLs=: 00:10:36.310 07:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:36.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:36.310 07:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:10:36.310 07:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.310 07:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.310 07:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.310 07:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:36.310 07:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:36.310 07:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:36.568 07:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:10:36.568 07:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:36.568 07:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:36.568 07:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:36.568 07:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:36.568 07:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:36.568 07:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:36.568 07:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.568 07:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.568 07:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.568 07:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:36.568 07:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:36.826 00:10:36.826 07:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:36.826 07:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:36.826 07:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:37.392 07:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
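The controller-name check above, together with the nvmf_subsystem_get_qpairs queries that follow it on every pass, is how the test confirms that authentication actually completed rather than merely that the attach returned. A condensed sketch of that verification, using the sockets, NQN, and jq paths from this run (rpc_cmd is assumed to hit the target's default RPC socket, matching the hostrpc/-s split in the trace):

    # Host side: exactly one attached bdev controller, named nvme0
    name=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]

    # Target side: the qpair (qid 0) must report the negotiated parameters and a finished auth handshake
    qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

On this pass the expected dhgroup is ffdhe4096; earlier and later passes substitute ffdhe2048, ffdhe3072, or ffdhe6144 accordingly.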
00:10:37.392 07:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:37.392 07:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.392 07:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.392 07:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.392 07:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:37.392 { 00:10:37.392 "cntlid": 27, 00:10:37.392 "qid": 0, 00:10:37.392 "state": "enabled", 00:10:37.392 "thread": "nvmf_tgt_poll_group_000", 00:10:37.392 "listen_address": { 00:10:37.392 "trtype": "TCP", 00:10:37.392 "adrfam": "IPv4", 00:10:37.392 "traddr": "10.0.0.2", 00:10:37.392 "trsvcid": "4420" 00:10:37.392 }, 00:10:37.392 "peer_address": { 00:10:37.392 "trtype": "TCP", 00:10:37.392 "adrfam": "IPv4", 00:10:37.392 "traddr": "10.0.0.1", 00:10:37.392 "trsvcid": "46906" 00:10:37.392 }, 00:10:37.392 "auth": { 00:10:37.392 "state": "completed", 00:10:37.392 "digest": "sha256", 00:10:37.392 "dhgroup": "ffdhe4096" 00:10:37.392 } 00:10:37.392 } 00:10:37.392 ]' 00:10:37.392 07:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:37.392 07:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:37.392 07:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:37.392 07:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:37.392 07:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:37.392 07:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:37.392 07:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:37.392 07:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:37.960 07:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:01:MWUwOWZhMjI2ZWFkMWM5MDgyM2ZlNjcwZjYxM2JmNTYnhNf7: --dhchap-ctrl-secret DHHC-1:02:MTY4MDZjODQ4MjU3Yzc0MzdkNWU3ZTI4ZDFkMzQyMTQ5YjMzZDVhYWFjODg3ODZlm/yyyA==: 00:10:38.526 07:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.526 07:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:10:38.526 07:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.526 07:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.526 07:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.526 07:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:38.526 07:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:38.526 07:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:38.785 07:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:10:38.785 07:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:38.785 07:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:38.785 07:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:38.785 07:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:38.785 07:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.785 07:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:38.785 07:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.785 07:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.785 07:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.785 07:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:38.785 07:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:39.351 00:10:39.351 07:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:39.351 07:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:39.351 07:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:39.609 07:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:39.609 07:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.609 07:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.609 07:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.609 07:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.609 07:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:39.609 { 00:10:39.609 "cntlid": 29, 00:10:39.609 "qid": 0, 00:10:39.609 "state": "enabled", 00:10:39.609 "thread": "nvmf_tgt_poll_group_000", 00:10:39.609 "listen_address": { 00:10:39.609 "trtype": "TCP", 00:10:39.609 "adrfam": "IPv4", 00:10:39.609 "traddr": "10.0.0.2", 00:10:39.609 "trsvcid": "4420" 00:10:39.609 }, 00:10:39.609 "peer_address": { 00:10:39.609 "trtype": "TCP", 00:10:39.609 "adrfam": "IPv4", 00:10:39.609 "traddr": "10.0.0.1", 00:10:39.609 "trsvcid": "46936" 00:10:39.609 }, 00:10:39.609 "auth": { 00:10:39.609 "state": "completed", 00:10:39.609 "digest": "sha256", 00:10:39.609 "dhgroup": 
"ffdhe4096" 00:10:39.609 } 00:10:39.609 } 00:10:39.609 ]' 00:10:39.609 07:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:39.867 07:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:39.867 07:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:39.867 07:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:39.867 07:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:39.867 07:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:39.867 07:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:39.867 07:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.433 07:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:02:MWE3MDIxYWMxYzgzMWU1YzZhZjMwNTQ5M2Q5ZDA2MWQ4NWVkNThhOGRiODg5NmE22Evdew==: --dhchap-ctrl-secret DHHC-1:01:NmEzYzQ4ZWEyMzBhNmY3OWM0ZjYyOTJmMjhjMTAzYTC0B3mf: 00:10:40.998 07:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:40.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:40.998 07:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:10:40.999 07:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.999 07:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.999 07:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.999 07:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:40.999 07:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:40.999 07:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:41.565 07:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:10:41.565 07:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:41.565 07:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:41.565 07:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:41.565 07:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:41.565 07:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:41.565 07:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key3 00:10:41.565 07:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.565 07:14:50 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:10:41.565 07:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.565 07:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:41.565 07:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:42.130 00:10:42.130 07:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:42.130 07:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:42.130 07:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:42.388 07:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:42.388 07:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:42.388 07:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.388 07:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.388 07:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.388 07:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:42.388 { 00:10:42.388 "cntlid": 31, 00:10:42.388 "qid": 0, 00:10:42.388 "state": "enabled", 00:10:42.388 "thread": "nvmf_tgt_poll_group_000", 00:10:42.388 "listen_address": { 00:10:42.388 "trtype": "TCP", 00:10:42.388 "adrfam": "IPv4", 00:10:42.388 "traddr": "10.0.0.2", 00:10:42.388 "trsvcid": "4420" 00:10:42.388 }, 00:10:42.388 "peer_address": { 00:10:42.388 "trtype": "TCP", 00:10:42.388 "adrfam": "IPv4", 00:10:42.388 "traddr": "10.0.0.1", 00:10:42.388 "trsvcid": "35162" 00:10:42.388 }, 00:10:42.388 "auth": { 00:10:42.388 "state": "completed", 00:10:42.388 "digest": "sha256", 00:10:42.388 "dhgroup": "ffdhe4096" 00:10:42.388 } 00:10:42.388 } 00:10:42.388 ]' 00:10:42.388 07:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:42.388 07:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:42.388 07:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:42.646 07:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:42.646 07:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:42.646 07:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:42.646 07:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:42.646 07:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:42.903 07:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid 
d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:03:MjY3OWFjYTBmNWU1NWFhYjcyZTUyNmI3ZjNiODg3NWRkZmIzMDU3MGFjNWQ1MTJlNjBjMzQyZDg1ZGM4YjA2ZjRizus=: 00:10:43.469 07:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:43.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:43.727 07:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:10:43.727 07:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.727 07:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.727 07:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.727 07:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:43.727 07:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:43.727 07:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:43.727 07:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:44.002 07:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:10:44.002 07:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:44.002 07:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:44.002 07:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:44.002 07:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:44.002 07:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.002 07:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.002 07:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.002 07:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.002 07:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.002 07:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.002 07:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.566 00:10:44.566 07:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:44.566 07:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.566 07:14:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:44.825 07:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.825 07:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.825 07:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.825 07:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.825 07:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.825 07:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:44.825 { 00:10:44.825 "cntlid": 33, 00:10:44.825 "qid": 0, 00:10:44.825 "state": "enabled", 00:10:44.825 "thread": "nvmf_tgt_poll_group_000", 00:10:44.825 "listen_address": { 00:10:44.825 "trtype": "TCP", 00:10:44.825 "adrfam": "IPv4", 00:10:44.825 "traddr": "10.0.0.2", 00:10:44.825 "trsvcid": "4420" 00:10:44.825 }, 00:10:44.825 "peer_address": { 00:10:44.825 "trtype": "TCP", 00:10:44.825 "adrfam": "IPv4", 00:10:44.825 "traddr": "10.0.0.1", 00:10:44.825 "trsvcid": "35190" 00:10:44.825 }, 00:10:44.825 "auth": { 00:10:44.825 "state": "completed", 00:10:44.825 "digest": "sha256", 00:10:44.825 "dhgroup": "ffdhe6144" 00:10:44.825 } 00:10:44.825 } 00:10:44.825 ]' 00:10:44.825 07:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:44.825 07:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:44.825 07:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:44.825 07:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:44.825 07:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:44.825 07:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:44.825 07:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:44.825 07:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.389 07:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:00:YjBmM2JlNWMzOTVmYzk2OGQyMzA3MjgwNWUzMjZlOTQ5YjA1YzExNTI2MThkNGI15W2YHA==: --dhchap-ctrl-secret DHHC-1:03:YzViNzAwMzVjMzk5YThlZTE0ZjY0ZTk1OGQ4MTNlYzViZGIzNWE5NDYxZWM3ZmQxNmI5YzM2YTliOWZjOWFhMePeSLs=: 00:10:45.955 07:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:45.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:45.955 07:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:10:45.955 07:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.955 07:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.955 07:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.955 07:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:45.955 
07:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:45.955 07:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:46.214 07:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:10:46.214 07:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:46.214 07:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:46.214 07:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:46.214 07:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:46.214 07:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:46.214 07:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.214 07:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.214 07:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.214 07:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.214 07:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.214 07:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.780 00:10:46.780 07:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:46.780 07:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.780 07:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:47.400 07:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:47.400 07:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:47.400 07:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.400 07:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.400 07:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.400 07:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:47.400 { 00:10:47.400 "cntlid": 35, 00:10:47.400 "qid": 0, 00:10:47.400 "state": "enabled", 00:10:47.400 "thread": "nvmf_tgt_poll_group_000", 00:10:47.400 "listen_address": { 00:10:47.400 "trtype": "TCP", 00:10:47.400 "adrfam": "IPv4", 00:10:47.400 "traddr": "10.0.0.2", 00:10:47.400 "trsvcid": "4420" 00:10:47.400 }, 00:10:47.400 "peer_address": { 00:10:47.400 "trtype": "TCP", 00:10:47.400 
"adrfam": "IPv4", 00:10:47.400 "traddr": "10.0.0.1", 00:10:47.400 "trsvcid": "35206" 00:10:47.400 }, 00:10:47.400 "auth": { 00:10:47.400 "state": "completed", 00:10:47.400 "digest": "sha256", 00:10:47.400 "dhgroup": "ffdhe6144" 00:10:47.400 } 00:10:47.400 } 00:10:47.400 ]' 00:10:47.400 07:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:47.400 07:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:47.400 07:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:47.400 07:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:47.400 07:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:47.400 07:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:47.400 07:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:47.400 07:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.985 07:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:01:MWUwOWZhMjI2ZWFkMWM5MDgyM2ZlNjcwZjYxM2JmNTYnhNf7: --dhchap-ctrl-secret DHHC-1:02:MTY4MDZjODQ4MjU3Yzc0MzdkNWU3ZTI4ZDFkMzQyMTQ5YjMzZDVhYWFjODg3ODZlm/yyyA==: 00:10:48.596 07:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.596 07:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:10:48.596 07:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.596 07:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.596 07:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.597 07:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:48.597 07:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:48.597 07:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:48.855 07:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:10:48.855 07:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:48.855 07:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:48.855 07:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:48.855 07:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:48.855 07:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.855 07:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.855 07:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.855 07:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.855 07:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.855 07:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.855 07:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.423 00:10:49.423 07:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:49.423 07:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:49.423 07:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:49.694 07:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.694 07:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.695 07:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.695 07:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.695 07:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.695 07:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:49.695 { 00:10:49.695 "cntlid": 37, 00:10:49.695 "qid": 0, 00:10:49.695 "state": "enabled", 00:10:49.695 "thread": "nvmf_tgt_poll_group_000", 00:10:49.695 "listen_address": { 00:10:49.695 "trtype": "TCP", 00:10:49.695 "adrfam": "IPv4", 00:10:49.695 "traddr": "10.0.0.2", 00:10:49.695 "trsvcid": "4420" 00:10:49.695 }, 00:10:49.695 "peer_address": { 00:10:49.695 "trtype": "TCP", 00:10:49.695 "adrfam": "IPv4", 00:10:49.695 "traddr": "10.0.0.1", 00:10:49.695 "trsvcid": "35238" 00:10:49.695 }, 00:10:49.695 "auth": { 00:10:49.695 "state": "completed", 00:10:49.695 "digest": "sha256", 00:10:49.695 "dhgroup": "ffdhe6144" 00:10:49.695 } 00:10:49.695 } 00:10:49.695 ]' 00:10:49.695 07:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:49.695 07:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:49.695 07:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:49.695 07:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:49.954 07:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:49.954 07:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.954 07:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.954 07:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:50.224 07:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:02:MWE3MDIxYWMxYzgzMWU1YzZhZjMwNTQ5M2Q5ZDA2MWQ4NWVkNThhOGRiODg5NmE22Evdew==: --dhchap-ctrl-secret DHHC-1:01:NmEzYzQ4ZWEyMzBhNmY3OWM0ZjYyOTJmMjhjMTAzYTC0B3mf: 00:10:51.158 07:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:51.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:51.158 07:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:10:51.158 07:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.158 07:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.158 07:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.158 07:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:51.158 07:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:51.158 07:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:51.417 07:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:10:51.417 07:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:51.417 07:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:51.417 07:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:51.417 07:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:51.417 07:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:51.417 07:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key3 00:10:51.417 07:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.417 07:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.417 07:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.417 07:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:51.417 07:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:51.990 00:10:51.990 07:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:51.990 
07:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:51.990 07:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:52.318 07:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:52.318 07:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:52.318 07:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.318 07:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.318 07:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.318 07:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:52.318 { 00:10:52.318 "cntlid": 39, 00:10:52.318 "qid": 0, 00:10:52.318 "state": "enabled", 00:10:52.318 "thread": "nvmf_tgt_poll_group_000", 00:10:52.318 "listen_address": { 00:10:52.318 "trtype": "TCP", 00:10:52.318 "adrfam": "IPv4", 00:10:52.318 "traddr": "10.0.0.2", 00:10:52.318 "trsvcid": "4420" 00:10:52.318 }, 00:10:52.318 "peer_address": { 00:10:52.318 "trtype": "TCP", 00:10:52.318 "adrfam": "IPv4", 00:10:52.318 "traddr": "10.0.0.1", 00:10:52.318 "trsvcid": "45158" 00:10:52.318 }, 00:10:52.318 "auth": { 00:10:52.318 "state": "completed", 00:10:52.318 "digest": "sha256", 00:10:52.318 "dhgroup": "ffdhe6144" 00:10:52.318 } 00:10:52.318 } 00:10:52.318 ]' 00:10:52.318 07:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:52.318 07:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:52.318 07:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:52.318 07:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:52.318 07:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:52.577 07:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:52.577 07:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:52.577 07:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:52.835 07:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:03:MjY3OWFjYTBmNWU1NWFhYjcyZTUyNmI3ZjNiODg3NWRkZmIzMDU3MGFjNWQ1MTJlNjBjMzQyZDg1ZGM4YjA2ZjRizus=: 00:10:53.769 07:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:53.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:53.769 07:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:10:53.769 07:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.770 07:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.770 07:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.770 07:15:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:53.770 07:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:53.770 07:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:53.770 07:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:54.028 07:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:10:54.028 07:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:54.028 07:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:54.028 07:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:54.028 07:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:54.028 07:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.028 07:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:54.028 07:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.028 07:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.028 07:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.028 07:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:54.028 07:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:54.968 00:10:54.968 07:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:54.968 07:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:54.968 07:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:54.968 07:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.968 07:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.968 07:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.968 07:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.968 07:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.968 07:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:54.968 { 00:10:54.968 "cntlid": 41, 00:10:54.968 "qid": 0, 00:10:54.968 "state": "enabled", 00:10:54.968 "thread": "nvmf_tgt_poll_group_000", 00:10:54.968 "listen_address": { 00:10:54.968 "trtype": 
"TCP", 00:10:54.968 "adrfam": "IPv4", 00:10:54.968 "traddr": "10.0.0.2", 00:10:54.968 "trsvcid": "4420" 00:10:54.968 }, 00:10:54.968 "peer_address": { 00:10:54.968 "trtype": "TCP", 00:10:54.968 "adrfam": "IPv4", 00:10:54.968 "traddr": "10.0.0.1", 00:10:54.968 "trsvcid": "45190" 00:10:54.968 }, 00:10:54.968 "auth": { 00:10:54.968 "state": "completed", 00:10:54.968 "digest": "sha256", 00:10:54.968 "dhgroup": "ffdhe8192" 00:10:54.968 } 00:10:54.968 } 00:10:54.968 ]' 00:10:54.968 07:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:54.968 07:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:54.968 07:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:55.226 07:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:55.226 07:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:55.226 07:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.226 07:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.226 07:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.484 07:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:00:YjBmM2JlNWMzOTVmYzk2OGQyMzA3MjgwNWUzMjZlOTQ5YjA1YzExNTI2MThkNGI15W2YHA==: --dhchap-ctrl-secret DHHC-1:03:YzViNzAwMzVjMzk5YThlZTE0ZjY0ZTk1OGQ4MTNlYzViZGIzNWE5NDYxZWM3ZmQxNmI5YzM2YTliOWZjOWFhMePeSLs=: 00:10:56.049 07:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:56.307 07:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:10:56.307 07:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.307 07:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.307 07:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.307 07:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:56.307 07:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:56.307 07:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:56.307 07:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:10:56.307 07:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:56.307 07:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:56.565 07:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:56.565 07:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:56.565 07:15:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.565 07:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:56.565 07:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.565 07:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.565 07:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.565 07:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:56.565 07:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:57.147 00:10:57.147 07:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:57.147 07:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.147 07:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:57.406 07:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.406 07:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.406 07:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.406 07:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.406 07:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.406 07:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:57.406 { 00:10:57.406 "cntlid": 43, 00:10:57.406 "qid": 0, 00:10:57.406 "state": "enabled", 00:10:57.406 "thread": "nvmf_tgt_poll_group_000", 00:10:57.406 "listen_address": { 00:10:57.406 "trtype": "TCP", 00:10:57.406 "adrfam": "IPv4", 00:10:57.406 "traddr": "10.0.0.2", 00:10:57.406 "trsvcid": "4420" 00:10:57.406 }, 00:10:57.406 "peer_address": { 00:10:57.406 "trtype": "TCP", 00:10:57.406 "adrfam": "IPv4", 00:10:57.406 "traddr": "10.0.0.1", 00:10:57.406 "trsvcid": "45218" 00:10:57.406 }, 00:10:57.406 "auth": { 00:10:57.406 "state": "completed", 00:10:57.406 "digest": "sha256", 00:10:57.406 "dhgroup": "ffdhe8192" 00:10:57.406 } 00:10:57.406 } 00:10:57.406 ]' 00:10:57.406 07:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:57.406 07:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:57.406 07:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:57.406 07:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:57.406 07:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:57.665 07:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:10:57.665 07:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.665 07:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.924 07:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:01:MWUwOWZhMjI2ZWFkMWM5MDgyM2ZlNjcwZjYxM2JmNTYnhNf7: --dhchap-ctrl-secret DHHC-1:02:MTY4MDZjODQ4MjU3Yzc0MzdkNWU3ZTI4ZDFkMzQyMTQ5YjMzZDVhYWFjODg3ODZlm/yyyA==: 00:10:58.490 07:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.490 07:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:10:58.490 07:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.490 07:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.490 07:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.490 07:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:58.490 07:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:58.490 07:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:58.749 07:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:10:58.749 07:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:58.749 07:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:58.749 07:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:58.749 07:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:58.749 07:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.749 07:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:58.749 07:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.749 07:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.749 07:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.749 07:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:58.749 07:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:59.683 00:10:59.683 07:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:59.683 07:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:59.683 07:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:59.941 07:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:59.941 07:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:59.941 07:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.941 07:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.941 07:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.941 07:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:59.941 { 00:10:59.941 "cntlid": 45, 00:10:59.941 "qid": 0, 00:10:59.941 "state": "enabled", 00:10:59.941 "thread": "nvmf_tgt_poll_group_000", 00:10:59.941 "listen_address": { 00:10:59.941 "trtype": "TCP", 00:10:59.941 "adrfam": "IPv4", 00:10:59.941 "traddr": "10.0.0.2", 00:10:59.941 "trsvcid": "4420" 00:10:59.941 }, 00:10:59.941 "peer_address": { 00:10:59.941 "trtype": "TCP", 00:10:59.941 "adrfam": "IPv4", 00:10:59.941 "traddr": "10.0.0.1", 00:10:59.941 "trsvcid": "45254" 00:10:59.941 }, 00:10:59.941 "auth": { 00:10:59.941 "state": "completed", 00:10:59.941 "digest": "sha256", 00:10:59.941 "dhgroup": "ffdhe8192" 00:10:59.941 } 00:10:59.941 } 00:10:59.941 ]' 00:10:59.941 07:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:59.941 07:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:59.941 07:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:59.941 07:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:59.941 07:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:59.941 07:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:59.941 07:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:59.941 07:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:00.199 07:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:02:MWE3MDIxYWMxYzgzMWU1YzZhZjMwNTQ5M2Q5ZDA2MWQ4NWVkNThhOGRiODg5NmE22Evdew==: --dhchap-ctrl-secret DHHC-1:01:NmEzYzQ4ZWEyMzBhNmY3OWM0ZjYyOTJmMjhjMTAzYTC0B3mf: 00:11:01.133 07:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.133 07:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:01.133 07:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.133 07:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.133 07:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.133 07:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:01.133 07:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:01.133 07:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:01.392 07:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:11:01.392 07:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:01.392 07:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:01.392 07:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:01.392 07:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:01.392 07:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.392 07:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key3 00:11:01.392 07:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.392 07:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.392 07:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.392 07:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:01.392 07:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:01.958 00:11:02.217 07:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:02.217 07:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.217 07:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:02.217 07:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.217 07:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.217 07:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.217 07:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.480 07:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.480 07:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:11:02.480 { 00:11:02.480 "cntlid": 47, 00:11:02.480 "qid": 0, 00:11:02.480 "state": "enabled", 00:11:02.480 "thread": "nvmf_tgt_poll_group_000", 00:11:02.480 "listen_address": { 00:11:02.480 "trtype": "TCP", 00:11:02.480 "adrfam": "IPv4", 00:11:02.480 "traddr": "10.0.0.2", 00:11:02.480 "trsvcid": "4420" 00:11:02.480 }, 00:11:02.480 "peer_address": { 00:11:02.480 "trtype": "TCP", 00:11:02.480 "adrfam": "IPv4", 00:11:02.480 "traddr": "10.0.0.1", 00:11:02.480 "trsvcid": "40938" 00:11:02.480 }, 00:11:02.480 "auth": { 00:11:02.480 "state": "completed", 00:11:02.480 "digest": "sha256", 00:11:02.480 "dhgroup": "ffdhe8192" 00:11:02.480 } 00:11:02.480 } 00:11:02.480 ]' 00:11:02.480 07:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:02.480 07:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:02.480 07:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:02.480 07:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:02.480 07:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:02.480 07:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.480 07:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.480 07:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.738 07:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:03:MjY3OWFjYTBmNWU1NWFhYjcyZTUyNmI3ZjNiODg3NWRkZmIzMDU3MGFjNWQ1MTJlNjBjMzQyZDg1ZGM4YjA2ZjRizus=: 00:11:03.674 07:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.674 07:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:03.674 07:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.674 07:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.674 07:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.674 07:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:03.674 07:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:03.674 07:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:03.674 07:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:03.674 07:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:03.933 07:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:11:03.933 07:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
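
For reference, every pass of the loop traced here runs the same sequence; the following is a minimal sketch assembled only from the RPC and nvme-cli invocations visible in this log (the subsystem and host NQNs, address 10.0.0.2, port 4420, and the /var/tmp/host.sock path are taken from the trace; $digest, $dhgroup and $keyid stand in for the per-iteration values, and the long DHHC-1 secret strings are abbreviated rather than repeated):

  # Host side: restrict the initiator to one digest/dhgroup pair for this iteration.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # Target side: allow the host NQN with the matching key pair.
  # (rpc_cmd is assumed to be the autotest wrapper for the target's RPC socket.)
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # Attach a controller through bdev_nvme using the same keys.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # Verify the connection authenticated with the expected parameters.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers \
      | jq -r '.[].name'                                                  # expect nvme0
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.state'                                           # expect "completed" (digest and dhgroup are checked the same way)
  # Tear down, repeat the handshake through nvme-cli, then clean up the host entry.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 \
      --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 \
      --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."      # secret values as printed in the trace
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9
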
00:11:03.933 07:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:03.933 07:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:03.933 07:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:03.933 07:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.933 07:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:03.933 07:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.933 07:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.933 07:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.933 07:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:03.933 07:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.192 00:11:04.192 07:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:04.192 07:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.192 07:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:04.462 07:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.462 07:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.462 07:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.462 07:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.462 07:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.462 07:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:04.462 { 00:11:04.462 "cntlid": 49, 00:11:04.462 "qid": 0, 00:11:04.462 "state": "enabled", 00:11:04.462 "thread": "nvmf_tgt_poll_group_000", 00:11:04.462 "listen_address": { 00:11:04.462 "trtype": "TCP", 00:11:04.462 "adrfam": "IPv4", 00:11:04.462 "traddr": "10.0.0.2", 00:11:04.462 "trsvcid": "4420" 00:11:04.462 }, 00:11:04.462 "peer_address": { 00:11:04.462 "trtype": "TCP", 00:11:04.462 "adrfam": "IPv4", 00:11:04.462 "traddr": "10.0.0.1", 00:11:04.462 "trsvcid": "40976" 00:11:04.462 }, 00:11:04.462 "auth": { 00:11:04.462 "state": "completed", 00:11:04.462 "digest": "sha384", 00:11:04.462 "dhgroup": "null" 00:11:04.462 } 00:11:04.462 } 00:11:04.462 ]' 00:11:04.462 07:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:04.462 07:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:04.462 07:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:04.720 07:15:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:04.720 07:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:04.720 07:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.720 07:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.720 07:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.978 07:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:00:YjBmM2JlNWMzOTVmYzk2OGQyMzA3MjgwNWUzMjZlOTQ5YjA1YzExNTI2MThkNGI15W2YHA==: --dhchap-ctrl-secret DHHC-1:03:YzViNzAwMzVjMzk5YThlZTE0ZjY0ZTk1OGQ4MTNlYzViZGIzNWE5NDYxZWM3ZmQxNmI5YzM2YTliOWZjOWFhMePeSLs=: 00:11:05.913 07:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.913 07:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:05.913 07:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.913 07:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.913 07:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.913 07:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:05.913 07:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:05.913 07:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:05.913 07:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:11:05.913 07:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:05.913 07:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:05.913 07:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:05.913 07:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:05.913 07:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.913 07:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:05.913 07:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.913 07:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.913 07:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.913 07:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:05.913 07:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.480 00:11:06.480 07:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:06.480 07:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:06.480 07:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.738 07:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.738 07:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.738 07:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.738 07:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.738 07:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.738 07:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:06.738 { 00:11:06.738 "cntlid": 51, 00:11:06.738 "qid": 0, 00:11:06.738 "state": "enabled", 00:11:06.738 "thread": "nvmf_tgt_poll_group_000", 00:11:06.738 "listen_address": { 00:11:06.738 "trtype": "TCP", 00:11:06.738 "adrfam": "IPv4", 00:11:06.738 "traddr": "10.0.0.2", 00:11:06.738 "trsvcid": "4420" 00:11:06.738 }, 00:11:06.738 "peer_address": { 00:11:06.738 "trtype": "TCP", 00:11:06.738 "adrfam": "IPv4", 00:11:06.738 "traddr": "10.0.0.1", 00:11:06.738 "trsvcid": "40994" 00:11:06.738 }, 00:11:06.738 "auth": { 00:11:06.738 "state": "completed", 00:11:06.738 "digest": "sha384", 00:11:06.738 "dhgroup": "null" 00:11:06.738 } 00:11:06.738 } 00:11:06.738 ]' 00:11:06.738 07:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:06.738 07:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:06.738 07:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:06.738 07:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:06.738 07:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:06.738 07:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.738 07:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:06.738 07:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.997 07:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:01:MWUwOWZhMjI2ZWFkMWM5MDgyM2ZlNjcwZjYxM2JmNTYnhNf7: --dhchap-ctrl-secret DHHC-1:02:MTY4MDZjODQ4MjU3Yzc0MzdkNWU3ZTI4ZDFkMzQyMTQ5YjMzZDVhYWFjODg3ODZlm/yyyA==: 00:11:07.931 07:15:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:07.931 07:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:07.931 07:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.931 07:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.931 07:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.931 07:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:07.931 07:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:07.931 07:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:08.189 07:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:11:08.189 07:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:08.189 07:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:08.189 07:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:08.189 07:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:08.189 07:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.189 07:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.189 07:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.189 07:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.189 07:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.189 07:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.189 07:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.446 00:11:08.446 07:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:08.446 07:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:08.446 07:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.731 07:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.731 07:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.731 07:15:17 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.731 07:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.731 07:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.731 07:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:08.731 { 00:11:08.731 "cntlid": 53, 00:11:08.731 "qid": 0, 00:11:08.731 "state": "enabled", 00:11:08.731 "thread": "nvmf_tgt_poll_group_000", 00:11:08.731 "listen_address": { 00:11:08.731 "trtype": "TCP", 00:11:08.731 "adrfam": "IPv4", 00:11:08.731 "traddr": "10.0.0.2", 00:11:08.731 "trsvcid": "4420" 00:11:08.731 }, 00:11:08.731 "peer_address": { 00:11:08.731 "trtype": "TCP", 00:11:08.731 "adrfam": "IPv4", 00:11:08.731 "traddr": "10.0.0.1", 00:11:08.731 "trsvcid": "41030" 00:11:08.731 }, 00:11:08.731 "auth": { 00:11:08.731 "state": "completed", 00:11:08.731 "digest": "sha384", 00:11:08.731 "dhgroup": "null" 00:11:08.731 } 00:11:08.731 } 00:11:08.731 ]' 00:11:08.731 07:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:08.731 07:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:08.731 07:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:08.731 07:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:08.731 07:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:08.988 07:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.988 07:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.988 07:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.245 07:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:02:MWE3MDIxYWMxYzgzMWU1YzZhZjMwNTQ5M2Q5ZDA2MWQ4NWVkNThhOGRiODg5NmE22Evdew==: --dhchap-ctrl-secret DHHC-1:01:NmEzYzQ4ZWEyMzBhNmY3OWM0ZjYyOTJmMjhjMTAzYTC0B3mf: 00:11:09.811 07:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.811 07:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:09.811 07:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.811 07:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.811 07:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.811 07:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:09.811 07:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:09.811 07:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:10.374 07:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 null 3 00:11:10.374 07:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:10.374 07:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:10.374 07:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:10.374 07:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:10.374 07:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.374 07:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key3 00:11:10.374 07:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.374 07:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.374 07:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.374 07:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:10.374 07:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:10.631 00:11:10.631 07:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:10.631 07:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:10.631 07:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.888 07:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.888 07:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.888 07:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.888 07:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.888 07:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.888 07:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:10.888 { 00:11:10.888 "cntlid": 55, 00:11:10.888 "qid": 0, 00:11:10.888 "state": "enabled", 00:11:10.888 "thread": "nvmf_tgt_poll_group_000", 00:11:10.888 "listen_address": { 00:11:10.888 "trtype": "TCP", 00:11:10.888 "adrfam": "IPv4", 00:11:10.888 "traddr": "10.0.0.2", 00:11:10.888 "trsvcid": "4420" 00:11:10.888 }, 00:11:10.888 "peer_address": { 00:11:10.888 "trtype": "TCP", 00:11:10.888 "adrfam": "IPv4", 00:11:10.888 "traddr": "10.0.0.1", 00:11:10.888 "trsvcid": "34362" 00:11:10.888 }, 00:11:10.888 "auth": { 00:11:10.888 "state": "completed", 00:11:10.888 "digest": "sha384", 00:11:10.888 "dhgroup": "null" 00:11:10.888 } 00:11:10.888 } 00:11:10.888 ]' 00:11:10.888 07:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:10.888 07:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:10.888 07:15:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:10.888 07:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:10.888 07:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:11.145 07:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.145 07:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.145 07:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.404 07:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:03:MjY3OWFjYTBmNWU1NWFhYjcyZTUyNmI3ZjNiODg3NWRkZmIzMDU3MGFjNWQ1MTJlNjBjMzQyZDg1ZGM4YjA2ZjRizus=: 00:11:11.971 07:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.971 07:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:11.971 07:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.971 07:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.971 07:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.971 07:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:11.971 07:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:11.971 07:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:11.971 07:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:12.229 07:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:11:12.229 07:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:12.229 07:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:12.229 07:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:12.229 07:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:12.229 07:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.229 07:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.229 07:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.229 07:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.487 07:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.487 07:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.487 07:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.744 00:11:12.744 07:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:12.744 07:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:12.744 07:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.001 07:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.001 07:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.001 07:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.001 07:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.001 07:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.001 07:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:13.001 { 00:11:13.001 "cntlid": 57, 00:11:13.001 "qid": 0, 00:11:13.001 "state": "enabled", 00:11:13.001 "thread": "nvmf_tgt_poll_group_000", 00:11:13.001 "listen_address": { 00:11:13.001 "trtype": "TCP", 00:11:13.001 "adrfam": "IPv4", 00:11:13.001 "traddr": "10.0.0.2", 00:11:13.001 "trsvcid": "4420" 00:11:13.001 }, 00:11:13.001 "peer_address": { 00:11:13.002 "trtype": "TCP", 00:11:13.002 "adrfam": "IPv4", 00:11:13.002 "traddr": "10.0.0.1", 00:11:13.002 "trsvcid": "34400" 00:11:13.002 }, 00:11:13.002 "auth": { 00:11:13.002 "state": "completed", 00:11:13.002 "digest": "sha384", 00:11:13.002 "dhgroup": "ffdhe2048" 00:11:13.002 } 00:11:13.002 } 00:11:13.002 ]' 00:11:13.002 07:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:13.002 07:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:13.002 07:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:13.261 07:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:13.261 07:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:13.261 07:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.261 07:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.261 07:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.518 07:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:00:YjBmM2JlNWMzOTVmYzk2OGQyMzA3MjgwNWUzMjZlOTQ5YjA1YzExNTI2MThkNGI15W2YHA==: --dhchap-ctrl-secret 
DHHC-1:03:YzViNzAwMzVjMzk5YThlZTE0ZjY0ZTk1OGQ4MTNlYzViZGIzNWE5NDYxZWM3ZmQxNmI5YzM2YTliOWZjOWFhMePeSLs=: 00:11:14.143 07:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.143 07:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:14.143 07:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.143 07:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.143 07:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.143 07:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:14.143 07:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:14.143 07:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:14.401 07:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:11:14.402 07:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:14.402 07:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:14.402 07:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:14.402 07:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:14.402 07:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.402 07:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.402 07:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.402 07:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.402 07:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.402 07:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.402 07:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.967 00:11:14.967 07:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:14.967 07:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:14.967 07:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.225 07:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
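For readers decoding the trace, every connect_authenticate pass above and below follows the same sequence; the sketch here is condensed from the exact commands visible in this log and is not an additional test step. hostrpc expands, as the trace shows, to the host-app rpc.py on /var/tmp/host.sock; rpc_cmd is the target-side RPC wrapper the test framework provides (its expansion is not shown in this excerpt). The NQNs, addresses and key names are simply the values this particular run uses.

    # hostrpc as it expands throughout this log:
    hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

    # Host side: pin the digest/dhgroup combination under test.
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

    # Target side: authorize the host NQN for DH-HMAC-CHAP with key1/ckey1.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side: attach with the matching keys, forcing authentication.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1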
00:11:15.225 07:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.225 07:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.225 07:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.225 07:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.225 07:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:15.225 { 00:11:15.225 "cntlid": 59, 00:11:15.225 "qid": 0, 00:11:15.225 "state": "enabled", 00:11:15.225 "thread": "nvmf_tgt_poll_group_000", 00:11:15.225 "listen_address": { 00:11:15.225 "trtype": "TCP", 00:11:15.225 "adrfam": "IPv4", 00:11:15.225 "traddr": "10.0.0.2", 00:11:15.225 "trsvcid": "4420" 00:11:15.225 }, 00:11:15.225 "peer_address": { 00:11:15.225 "trtype": "TCP", 00:11:15.225 "adrfam": "IPv4", 00:11:15.225 "traddr": "10.0.0.1", 00:11:15.225 "trsvcid": "34416" 00:11:15.225 }, 00:11:15.225 "auth": { 00:11:15.225 "state": "completed", 00:11:15.225 "digest": "sha384", 00:11:15.225 "dhgroup": "ffdhe2048" 00:11:15.225 } 00:11:15.225 } 00:11:15.225 ]' 00:11:15.225 07:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:15.225 07:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:15.225 07:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:15.225 07:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:15.225 07:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:15.483 07:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.483 07:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.483 07:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.741 07:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:01:MWUwOWZhMjI2ZWFkMWM5MDgyM2ZlNjcwZjYxM2JmNTYnhNf7: --dhchap-ctrl-secret DHHC-1:02:MTY4MDZjODQ4MjU3Yzc0MzdkNWU3ZTI4ZDFkMzQyMTQ5YjMzZDVhYWFjODg3ODZlm/yyyA==: 00:11:16.307 07:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.307 07:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:16.307 07:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.307 07:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.307 07:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.307 07:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:16.307 07:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:16.307 07:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:16.565 07:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:11:16.565 07:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:16.565 07:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:16.565 07:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:16.565 07:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:16.565 07:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.565 07:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.565 07:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.565 07:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.565 07:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.565 07:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.565 07:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.130 00:11:17.130 07:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:17.130 07:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:17.130 07:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.388 07:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.388 07:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.388 07:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.388 07:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.388 07:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.388 07:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:17.388 { 00:11:17.388 "cntlid": 61, 00:11:17.388 "qid": 0, 00:11:17.388 "state": "enabled", 00:11:17.388 "thread": "nvmf_tgt_poll_group_000", 00:11:17.388 "listen_address": { 00:11:17.388 "trtype": "TCP", 00:11:17.388 "adrfam": "IPv4", 00:11:17.388 "traddr": "10.0.0.2", 00:11:17.388 "trsvcid": "4420" 00:11:17.388 }, 00:11:17.388 "peer_address": { 00:11:17.388 "trtype": "TCP", 00:11:17.388 "adrfam": "IPv4", 00:11:17.388 "traddr": "10.0.0.1", 00:11:17.388 "trsvcid": "34444" 00:11:17.388 }, 00:11:17.388 "auth": { 00:11:17.388 "state": "completed", 00:11:17.388 "digest": "sha384", 00:11:17.388 "dhgroup": 
"ffdhe2048" 00:11:17.388 } 00:11:17.388 } 00:11:17.388 ]' 00:11:17.389 07:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:17.389 07:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:17.389 07:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:17.389 07:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:17.389 07:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:17.389 07:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.389 07:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.389 07:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.647 07:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:02:MWE3MDIxYWMxYzgzMWU1YzZhZjMwNTQ5M2Q5ZDA2MWQ4NWVkNThhOGRiODg5NmE22Evdew==: --dhchap-ctrl-secret DHHC-1:01:NmEzYzQ4ZWEyMzBhNmY3OWM0ZjYyOTJmMjhjMTAzYTC0B3mf: 00:11:18.580 07:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.580 07:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:18.580 07:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.580 07:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.580 07:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.580 07:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:18.580 07:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:18.580 07:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:18.838 07:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:11:18.838 07:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:18.838 07:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:18.838 07:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:18.838 07:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:18.838 07:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.838 07:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key3 00:11:18.838 07:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.838 07:15:27 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:18.838 07:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.838 07:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:18.838 07:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:19.097 00:11:19.097 07:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:19.097 07:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:19.097 07:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.354 07:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.354 07:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.354 07:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.355 07:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.355 07:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.355 07:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:19.355 { 00:11:19.355 "cntlid": 63, 00:11:19.355 "qid": 0, 00:11:19.355 "state": "enabled", 00:11:19.355 "thread": "nvmf_tgt_poll_group_000", 00:11:19.355 "listen_address": { 00:11:19.355 "trtype": "TCP", 00:11:19.355 "adrfam": "IPv4", 00:11:19.355 "traddr": "10.0.0.2", 00:11:19.355 "trsvcid": "4420" 00:11:19.355 }, 00:11:19.355 "peer_address": { 00:11:19.355 "trtype": "TCP", 00:11:19.355 "adrfam": "IPv4", 00:11:19.355 "traddr": "10.0.0.1", 00:11:19.355 "trsvcid": "34468" 00:11:19.355 }, 00:11:19.355 "auth": { 00:11:19.355 "state": "completed", 00:11:19.355 "digest": "sha384", 00:11:19.355 "dhgroup": "ffdhe2048" 00:11:19.355 } 00:11:19.355 } 00:11:19.355 ]' 00:11:19.355 07:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:19.613 07:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:19.613 07:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:19.613 07:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:19.613 07:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:19.613 07:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.613 07:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.613 07:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.871 07:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid 
d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:03:MjY3OWFjYTBmNWU1NWFhYjcyZTUyNmI3ZjNiODg3NWRkZmIzMDU3MGFjNWQ1MTJlNjBjMzQyZDg1ZGM4YjA2ZjRizus=: 00:11:20.900 07:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.900 07:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:20.900 07:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.900 07:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.900 07:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.900 07:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:20.900 07:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:20.900 07:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:20.900 07:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:20.900 07:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:11:20.900 07:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:20.901 07:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:20.901 07:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:20.901 07:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:20.901 07:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.901 07:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.901 07:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.901 07:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.901 07:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.901 07:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.901 07:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.466 00:11:21.466 07:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:21.466 07:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:21.467 07:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.724 07:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.724 07:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.724 07:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.724 07:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.724 07:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.724 07:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:21.724 { 00:11:21.724 "cntlid": 65, 00:11:21.724 "qid": 0, 00:11:21.724 "state": "enabled", 00:11:21.724 "thread": "nvmf_tgt_poll_group_000", 00:11:21.724 "listen_address": { 00:11:21.724 "trtype": "TCP", 00:11:21.724 "adrfam": "IPv4", 00:11:21.724 "traddr": "10.0.0.2", 00:11:21.724 "trsvcid": "4420" 00:11:21.724 }, 00:11:21.724 "peer_address": { 00:11:21.724 "trtype": "TCP", 00:11:21.724 "adrfam": "IPv4", 00:11:21.724 "traddr": "10.0.0.1", 00:11:21.724 "trsvcid": "44942" 00:11:21.724 }, 00:11:21.724 "auth": { 00:11:21.724 "state": "completed", 00:11:21.724 "digest": "sha384", 00:11:21.724 "dhgroup": "ffdhe3072" 00:11:21.724 } 00:11:21.724 } 00:11:21.724 ]' 00:11:21.724 07:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:21.724 07:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:21.724 07:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:21.724 07:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:21.724 07:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:21.724 07:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.724 07:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.724 07:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.983 07:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:00:YjBmM2JlNWMzOTVmYzk2OGQyMzA3MjgwNWUzMjZlOTQ5YjA1YzExNTI2MThkNGI15W2YHA==: --dhchap-ctrl-secret DHHC-1:03:YzViNzAwMzVjMzk5YThlZTE0ZjY0ZTk1OGQ4MTNlYzViZGIzNWE5NDYxZWM3ZmQxNmI5YzM2YTliOWZjOWFhMePeSLs=: 00:11:22.916 07:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.916 07:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:22.916 07:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.916 07:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.916 07:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.916 07:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:11:22.916 07:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:22.916 07:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:23.174 07:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:11:23.174 07:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:23.174 07:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:23.174 07:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:23.174 07:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:23.174 07:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.174 07:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.174 07:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.174 07:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.174 07:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.174 07:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.174 07:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.432 00:11:23.432 07:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:23.432 07:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.432 07:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:23.691 07:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.691 07:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.691 07:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.691 07:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.691 07:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.691 07:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:23.691 { 00:11:23.691 "cntlid": 67, 00:11:23.691 "qid": 0, 00:11:23.691 "state": "enabled", 00:11:23.691 "thread": "nvmf_tgt_poll_group_000", 00:11:23.691 "listen_address": { 00:11:23.691 "trtype": "TCP", 00:11:23.691 "adrfam": "IPv4", 00:11:23.691 "traddr": "10.0.0.2", 00:11:23.691 "trsvcid": "4420" 00:11:23.691 }, 00:11:23.691 "peer_address": { 00:11:23.691 "trtype": 
"TCP", 00:11:23.691 "adrfam": "IPv4", 00:11:23.691 "traddr": "10.0.0.1", 00:11:23.691 "trsvcid": "44956" 00:11:23.691 }, 00:11:23.691 "auth": { 00:11:23.691 "state": "completed", 00:11:23.691 "digest": "sha384", 00:11:23.691 "dhgroup": "ffdhe3072" 00:11:23.691 } 00:11:23.691 } 00:11:23.691 ]' 00:11:23.691 07:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:23.691 07:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:23.691 07:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:23.949 07:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:23.949 07:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:23.949 07:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.949 07:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.949 07:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.206 07:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:01:MWUwOWZhMjI2ZWFkMWM5MDgyM2ZlNjcwZjYxM2JmNTYnhNf7: --dhchap-ctrl-secret DHHC-1:02:MTY4MDZjODQ4MjU3Yzc0MzdkNWU3ZTI4ZDFkMzQyMTQ5YjMzZDVhYWFjODg3ODZlm/yyyA==: 00:11:25.139 07:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.140 07:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:25.140 07:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.140 07:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.140 07:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.140 07:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:25.140 07:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:25.140 07:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:25.140 07:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:11:25.140 07:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:25.140 07:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:25.140 07:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:25.140 07:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:25.140 07:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.140 07:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.140 07:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.140 07:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.140 07:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.140 07:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.140 07:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.706 00:11:25.706 07:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:25.706 07:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.706 07:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:25.706 07:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.706 07:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.706 07:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.706 07:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.706 07:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.706 07:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:25.706 { 00:11:25.706 "cntlid": 69, 00:11:25.706 "qid": 0, 00:11:25.706 "state": "enabled", 00:11:25.706 "thread": "nvmf_tgt_poll_group_000", 00:11:25.706 "listen_address": { 00:11:25.706 "trtype": "TCP", 00:11:25.706 "adrfam": "IPv4", 00:11:25.706 "traddr": "10.0.0.2", 00:11:25.706 "trsvcid": "4420" 00:11:25.706 }, 00:11:25.706 "peer_address": { 00:11:25.706 "trtype": "TCP", 00:11:25.706 "adrfam": "IPv4", 00:11:25.706 "traddr": "10.0.0.1", 00:11:25.706 "trsvcid": "44980" 00:11:25.706 }, 00:11:25.706 "auth": { 00:11:25.706 "state": "completed", 00:11:25.706 "digest": "sha384", 00:11:25.706 "dhgroup": "ffdhe3072" 00:11:25.706 } 00:11:25.706 } 00:11:25.706 ]' 00:11:25.706 07:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:25.964 07:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:25.964 07:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:25.964 07:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:25.964 07:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:25.964 07:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.964 07:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.964 07:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.274 07:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:02:MWE3MDIxYWMxYzgzMWU1YzZhZjMwNTQ5M2Q5ZDA2MWQ4NWVkNThhOGRiODg5NmE22Evdew==: --dhchap-ctrl-secret DHHC-1:01:NmEzYzQ4ZWEyMzBhNmY3OWM0ZjYyOTJmMjhjMTAzYTC0B3mf: 00:11:26.839 07:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.839 07:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:26.839 07:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.839 07:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.839 07:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.839 07:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:26.839 07:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:26.839 07:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:27.097 07:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:11:27.097 07:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:27.097 07:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:27.097 07:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:27.097 07:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:27.097 07:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.097 07:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key3 00:11:27.097 07:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.097 07:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.097 07:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.097 07:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:27.097 07:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:27.664 00:11:27.664 07:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
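The verification that follows each attach, repeated throughout this trace, amounts to the checks below. This is a condensed sketch built from the jq expressions in the log (the here-string plumbing is an editorial condensation); the expected values are those of the current sha384/ffdhe3072 pass, and rpc_cmd/hostrpc are the same wrappers described above.

    # The host-side controller must have come up under the expected name ...
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # ... and the target's qpair must report the negotiated auth parameters.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Tear down before the next digest/dhgroup/key combination.
    hostrpc bdev_nvme_detach_controller nvme0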
00:11:27.664 07:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:27.664 07:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.922 07:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.922 07:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.922 07:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.922 07:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.922 07:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.922 07:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:27.922 { 00:11:27.922 "cntlid": 71, 00:11:27.922 "qid": 0, 00:11:27.922 "state": "enabled", 00:11:27.922 "thread": "nvmf_tgt_poll_group_000", 00:11:27.922 "listen_address": { 00:11:27.922 "trtype": "TCP", 00:11:27.922 "adrfam": "IPv4", 00:11:27.922 "traddr": "10.0.0.2", 00:11:27.922 "trsvcid": "4420" 00:11:27.922 }, 00:11:27.922 "peer_address": { 00:11:27.922 "trtype": "TCP", 00:11:27.922 "adrfam": "IPv4", 00:11:27.922 "traddr": "10.0.0.1", 00:11:27.922 "trsvcid": "45004" 00:11:27.922 }, 00:11:27.923 "auth": { 00:11:27.923 "state": "completed", 00:11:27.923 "digest": "sha384", 00:11:27.923 "dhgroup": "ffdhe3072" 00:11:27.923 } 00:11:27.923 } 00:11:27.923 ]' 00:11:27.923 07:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:27.923 07:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:27.923 07:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:27.923 07:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:27.923 07:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:27.923 07:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.923 07:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.923 07:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.181 07:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:03:MjY3OWFjYTBmNWU1NWFhYjcyZTUyNmI3ZjNiODg3NWRkZmIzMDU3MGFjNWQ1MTJlNjBjMzQyZDg1ZGM4YjA2ZjRizus=: 00:11:29.115 07:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.115 07:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:29.115 07:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.115 07:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.115 07:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.115 07:15:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:29.115 07:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:29.115 07:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:29.115 07:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:29.373 07:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:11:29.373 07:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:29.373 07:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:29.373 07:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:29.373 07:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:29.373 07:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.373 07:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.373 07:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.373 07:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.373 07:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.373 07:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.373 07:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.630 00:11:29.630 07:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:29.630 07:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:29.631 07:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.888 07:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.888 07:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.888 07:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.888 07:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.888 07:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.888 07:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:29.888 { 00:11:29.888 "cntlid": 73, 00:11:29.888 "qid": 0, 00:11:29.888 "state": "enabled", 00:11:29.888 "thread": "nvmf_tgt_poll_group_000", 00:11:29.888 "listen_address": { 00:11:29.888 "trtype": 
"TCP", 00:11:29.888 "adrfam": "IPv4", 00:11:29.888 "traddr": "10.0.0.2", 00:11:29.888 "trsvcid": "4420" 00:11:29.888 }, 00:11:29.888 "peer_address": { 00:11:29.888 "trtype": "TCP", 00:11:29.888 "adrfam": "IPv4", 00:11:29.888 "traddr": "10.0.0.1", 00:11:29.888 "trsvcid": "45038" 00:11:29.888 }, 00:11:29.888 "auth": { 00:11:29.888 "state": "completed", 00:11:29.888 "digest": "sha384", 00:11:29.888 "dhgroup": "ffdhe4096" 00:11:29.888 } 00:11:29.888 } 00:11:29.888 ]' 00:11:29.888 07:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:29.888 07:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:29.888 07:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:30.146 07:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:30.146 07:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:30.146 07:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.146 07:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.146 07:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.404 07:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:00:YjBmM2JlNWMzOTVmYzk2OGQyMzA3MjgwNWUzMjZlOTQ5YjA1YzExNTI2MThkNGI15W2YHA==: --dhchap-ctrl-secret DHHC-1:03:YzViNzAwMzVjMzk5YThlZTE0ZjY0ZTk1OGQ4MTNlYzViZGIzNWE5NDYxZWM3ZmQxNmI5YzM2YTliOWZjOWFhMePeSLs=: 00:11:30.992 07:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.992 07:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:30.992 07:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.992 07:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.992 07:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.992 07:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:30.992 07:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:30.992 07:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:31.249 07:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:11:31.249 07:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:31.249 07:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:31.249 07:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:31.249 07:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:31.249 07:15:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.249 07:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.249 07:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.249 07:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.249 07:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.249 07:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.249 07:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.815 00:11:31.815 07:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:31.815 07:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:31.815 07:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.076 07:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.076 07:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.076 07:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.076 07:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.076 07:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.076 07:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:32.076 { 00:11:32.076 "cntlid": 75, 00:11:32.076 "qid": 0, 00:11:32.076 "state": "enabled", 00:11:32.076 "thread": "nvmf_tgt_poll_group_000", 00:11:32.076 "listen_address": { 00:11:32.076 "trtype": "TCP", 00:11:32.076 "adrfam": "IPv4", 00:11:32.076 "traddr": "10.0.0.2", 00:11:32.076 "trsvcid": "4420" 00:11:32.076 }, 00:11:32.076 "peer_address": { 00:11:32.076 "trtype": "TCP", 00:11:32.076 "adrfam": "IPv4", 00:11:32.076 "traddr": "10.0.0.1", 00:11:32.076 "trsvcid": "42072" 00:11:32.076 }, 00:11:32.076 "auth": { 00:11:32.076 "state": "completed", 00:11:32.076 "digest": "sha384", 00:11:32.076 "dhgroup": "ffdhe4096" 00:11:32.076 } 00:11:32.076 } 00:11:32.076 ]' 00:11:32.076 07:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:32.076 07:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:32.076 07:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:32.076 07:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:32.076 07:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:32.334 07:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:11:32.334 07:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.334 07:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.592 07:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:01:MWUwOWZhMjI2ZWFkMWM5MDgyM2ZlNjcwZjYxM2JmNTYnhNf7: --dhchap-ctrl-secret DHHC-1:02:MTY4MDZjODQ4MjU3Yzc0MzdkNWU3ZTI4ZDFkMzQyMTQ5YjMzZDVhYWFjODg3ODZlm/yyyA==: 00:11:33.158 07:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.158 07:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:33.158 07:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.158 07:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.158 07:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.158 07:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:33.158 07:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:33.158 07:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:33.417 07:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:11:33.417 07:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:33.417 07:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:33.417 07:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:33.417 07:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:33.417 07:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.417 07:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.417 07:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.417 07:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.417 07:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.417 07:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.417 07:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.984 00:11:33.984 07:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:33.984 07:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:33.984 07:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.241 07:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.241 07:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.241 07:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.241 07:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.241 07:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.241 07:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:34.241 { 00:11:34.241 "cntlid": 77, 00:11:34.241 "qid": 0, 00:11:34.241 "state": "enabled", 00:11:34.241 "thread": "nvmf_tgt_poll_group_000", 00:11:34.241 "listen_address": { 00:11:34.241 "trtype": "TCP", 00:11:34.241 "adrfam": "IPv4", 00:11:34.241 "traddr": "10.0.0.2", 00:11:34.241 "trsvcid": "4420" 00:11:34.241 }, 00:11:34.241 "peer_address": { 00:11:34.241 "trtype": "TCP", 00:11:34.241 "adrfam": "IPv4", 00:11:34.241 "traddr": "10.0.0.1", 00:11:34.241 "trsvcid": "42104" 00:11:34.241 }, 00:11:34.241 "auth": { 00:11:34.241 "state": "completed", 00:11:34.241 "digest": "sha384", 00:11:34.241 "dhgroup": "ffdhe4096" 00:11:34.241 } 00:11:34.241 } 00:11:34.241 ]' 00:11:34.241 07:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:34.241 07:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:34.241 07:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:34.241 07:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:34.241 07:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:34.241 07:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.241 07:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.241 07:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.808 07:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:02:MWE3MDIxYWMxYzgzMWU1YzZhZjMwNTQ5M2Q5ZDA2MWQ4NWVkNThhOGRiODg5NmE22Evdew==: --dhchap-ctrl-secret DHHC-1:01:NmEzYzQ4ZWEyMzBhNmY3OWM0ZjYyOTJmMjhjMTAzYTC0B3mf: 00:11:35.402 07:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.402 07:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:35.402 07:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.403 07:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.403 07:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.403 07:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:35.403 07:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:35.403 07:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:35.662 07:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:11:35.662 07:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:35.662 07:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:35.662 07:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:35.662 07:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:35.662 07:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.662 07:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key3 00:11:35.662 07:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.662 07:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.662 07:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.662 07:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:35.662 07:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:35.921 00:11:35.921 07:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:35.921 07:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.921 07:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:36.180 07:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.180 07:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.180 07:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.180 07:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.180 07:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.180 07:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:11:36.180 { 00:11:36.180 "cntlid": 79, 00:11:36.180 "qid": 0, 00:11:36.180 "state": "enabled", 00:11:36.180 "thread": "nvmf_tgt_poll_group_000", 00:11:36.180 "listen_address": { 00:11:36.180 "trtype": "TCP", 00:11:36.180 "adrfam": "IPv4", 00:11:36.180 "traddr": "10.0.0.2", 00:11:36.180 "trsvcid": "4420" 00:11:36.180 }, 00:11:36.180 "peer_address": { 00:11:36.180 "trtype": "TCP", 00:11:36.180 "adrfam": "IPv4", 00:11:36.180 "traddr": "10.0.0.1", 00:11:36.180 "trsvcid": "42128" 00:11:36.180 }, 00:11:36.180 "auth": { 00:11:36.180 "state": "completed", 00:11:36.180 "digest": "sha384", 00:11:36.180 "dhgroup": "ffdhe4096" 00:11:36.180 } 00:11:36.180 } 00:11:36.180 ]' 00:11:36.180 07:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:36.438 07:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:36.438 07:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:36.438 07:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:36.438 07:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:36.438 07:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.438 07:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.438 07:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.697 07:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:03:MjY3OWFjYTBmNWU1NWFhYjcyZTUyNmI3ZjNiODg3NWRkZmIzMDU3MGFjNWQ1MTJlNjBjMzQyZDg1ZGM4YjA2ZjRizus=: 00:11:37.666 07:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.666 07:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:37.666 07:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.666 07:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.666 07:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.666 07:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:37.666 07:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:37.666 07:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:37.666 07:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:37.666 07:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:11:37.666 07:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:37.666 07:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
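[editor's note] For readability, here is a condensed sketch of the command sequence each connect_authenticate pass in this log performs (the sha384/ffdhe6144 pass beginning above is one such iteration). Paths, NQNs, flags and key names are copied from the log itself; rpc_cmd and hostrpc are the test harness wrappers around scripts/rpc.py for the target and host sockets, and the key0..key3 / ckey0..ckey3 key names are assumed to have been registered earlier in the run (outside this excerpt).

# 1) Constrain the SPDK host to the digest/dhgroup combination under test.
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# 2) Authorize the host NQN on the target subsystem with a DH-HMAC-CHAP key pair.
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3) Attach a controller from the SPDK host, presenting the matching key pair.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0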
00:11:37.666 07:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:37.666 07:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:37.666 07:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.666 07:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.666 07:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.666 07:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.666 07:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.666 07:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.666 07:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.230 00:11:38.230 07:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:38.230 07:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:38.230 07:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.488 07:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.488 07:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.488 07:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.488 07:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.488 07:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.488 07:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:38.488 { 00:11:38.488 "cntlid": 81, 00:11:38.488 "qid": 0, 00:11:38.488 "state": "enabled", 00:11:38.488 "thread": "nvmf_tgt_poll_group_000", 00:11:38.488 "listen_address": { 00:11:38.488 "trtype": "TCP", 00:11:38.488 "adrfam": "IPv4", 00:11:38.488 "traddr": "10.0.0.2", 00:11:38.488 "trsvcid": "4420" 00:11:38.488 }, 00:11:38.488 "peer_address": { 00:11:38.488 "trtype": "TCP", 00:11:38.488 "adrfam": "IPv4", 00:11:38.488 "traddr": "10.0.0.1", 00:11:38.488 "trsvcid": "42160" 00:11:38.488 }, 00:11:38.488 "auth": { 00:11:38.488 "state": "completed", 00:11:38.489 "digest": "sha384", 00:11:38.489 "dhgroup": "ffdhe6144" 00:11:38.489 } 00:11:38.489 } 00:11:38.489 ]' 00:11:38.489 07:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:38.489 07:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:38.489 07:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:38.746 07:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:11:38.746 07:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:38.746 07:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.746 07:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.746 07:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.005 07:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:00:YjBmM2JlNWMzOTVmYzk2OGQyMzA3MjgwNWUzMjZlOTQ5YjA1YzExNTI2MThkNGI15W2YHA==: --dhchap-ctrl-secret DHHC-1:03:YzViNzAwMzVjMzk5YThlZTE0ZjY0ZTk1OGQ4MTNlYzViZGIzNWE5NDYxZWM3ZmQxNmI5YzM2YTliOWZjOWFhMePeSLs=: 00:11:39.938 07:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.938 07:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:39.938 07:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.938 07:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.938 07:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.938 07:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:39.938 07:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:39.938 07:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:39.938 07:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:11:39.938 07:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:39.938 07:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:39.938 07:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:39.938 07:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:39.938 07:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.938 07:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.938 07:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.938 07:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.939 07:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.939 07:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.939 07:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.504 00:11:40.504 07:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:40.504 07:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:40.504 07:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.763 07:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.763 07:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.763 07:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.763 07:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.763 07:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.763 07:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:40.763 { 00:11:40.763 "cntlid": 83, 00:11:40.763 "qid": 0, 00:11:40.763 "state": "enabled", 00:11:40.763 "thread": "nvmf_tgt_poll_group_000", 00:11:40.763 "listen_address": { 00:11:40.763 "trtype": "TCP", 00:11:40.763 "adrfam": "IPv4", 00:11:40.763 "traddr": "10.0.0.2", 00:11:40.763 "trsvcid": "4420" 00:11:40.763 }, 00:11:40.763 "peer_address": { 00:11:40.763 "trtype": "TCP", 00:11:40.763 "adrfam": "IPv4", 00:11:40.763 "traddr": "10.0.0.1", 00:11:40.763 "trsvcid": "36538" 00:11:40.763 }, 00:11:40.763 "auth": { 00:11:40.763 "state": "completed", 00:11:40.763 "digest": "sha384", 00:11:40.763 "dhgroup": "ffdhe6144" 00:11:40.763 } 00:11:40.763 } 00:11:40.763 ]' 00:11:40.763 07:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:40.763 07:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:40.763 07:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:40.763 07:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:40.763 07:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:41.021 07:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.021 07:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.021 07:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.279 07:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:01:MWUwOWZhMjI2ZWFkMWM5MDgyM2ZlNjcwZjYxM2JmNTYnhNf7: --dhchap-ctrl-secret DHHC-1:02:MTY4MDZjODQ4MjU3Yzc0MzdkNWU3ZTI4ZDFkMzQyMTQ5YjMzZDVhYWFjODg3ODZlm/yyyA==: 00:11:41.845 07:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:11:41.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.845 07:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:41.845 07:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.845 07:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.845 07:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.845 07:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:41.845 07:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:41.845 07:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:42.104 07:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:11:42.104 07:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:42.104 07:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:42.104 07:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:42.105 07:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:42.105 07:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.105 07:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.105 07:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.105 07:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.105 07:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.105 07:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.105 07:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.672 00:11:42.672 07:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:42.672 07:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:42.672 07:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.931 07:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.931 07:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.931 07:15:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.931 07:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.931 07:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.931 07:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:42.931 { 00:11:42.931 "cntlid": 85, 00:11:42.931 "qid": 0, 00:11:42.931 "state": "enabled", 00:11:42.931 "thread": "nvmf_tgt_poll_group_000", 00:11:42.931 "listen_address": { 00:11:42.931 "trtype": "TCP", 00:11:42.931 "adrfam": "IPv4", 00:11:42.931 "traddr": "10.0.0.2", 00:11:42.931 "trsvcid": "4420" 00:11:42.931 }, 00:11:42.931 "peer_address": { 00:11:42.931 "trtype": "TCP", 00:11:42.931 "adrfam": "IPv4", 00:11:42.931 "traddr": "10.0.0.1", 00:11:42.931 "trsvcid": "36568" 00:11:42.931 }, 00:11:42.931 "auth": { 00:11:42.931 "state": "completed", 00:11:42.931 "digest": "sha384", 00:11:42.931 "dhgroup": "ffdhe6144" 00:11:42.931 } 00:11:42.931 } 00:11:42.931 ]' 00:11:42.931 07:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:43.253 07:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:43.254 07:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:43.254 07:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:43.254 07:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:43.254 07:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.254 07:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.254 07:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.511 07:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:02:MWE3MDIxYWMxYzgzMWU1YzZhZjMwNTQ5M2Q5ZDA2MWQ4NWVkNThhOGRiODg5NmE22Evdew==: --dhchap-ctrl-secret DHHC-1:01:NmEzYzQ4ZWEyMzBhNmY3OWM0ZjYyOTJmMjhjMTAzYTC0B3mf: 00:11:44.447 07:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.447 07:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:44.447 07:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.447 07:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.447 07:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.447 07:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:44.447 07:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:44.447 07:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:44.447 07:15:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:11:44.447 07:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:44.447 07:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:44.447 07:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:44.447 07:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:44.447 07:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.447 07:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key3 00:11:44.447 07:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.447 07:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.447 07:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.448 07:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:44.448 07:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:45.013 00:11:45.013 07:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:45.013 07:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:45.013 07:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.272 07:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.272 07:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.272 07:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.272 07:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.272 07:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.272 07:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:45.272 { 00:11:45.272 "cntlid": 87, 00:11:45.272 "qid": 0, 00:11:45.272 "state": "enabled", 00:11:45.272 "thread": "nvmf_tgt_poll_group_000", 00:11:45.272 "listen_address": { 00:11:45.272 "trtype": "TCP", 00:11:45.272 "adrfam": "IPv4", 00:11:45.272 "traddr": "10.0.0.2", 00:11:45.272 "trsvcid": "4420" 00:11:45.272 }, 00:11:45.272 "peer_address": { 00:11:45.272 "trtype": "TCP", 00:11:45.272 "adrfam": "IPv4", 00:11:45.272 "traddr": "10.0.0.1", 00:11:45.272 "trsvcid": "36580" 00:11:45.272 }, 00:11:45.272 "auth": { 00:11:45.272 "state": "completed", 00:11:45.272 "digest": "sha384", 00:11:45.272 "dhgroup": "ffdhe6144" 00:11:45.272 } 00:11:45.272 } 00:11:45.272 ]' 00:11:45.272 07:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:45.530 07:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:11:45.530 07:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.530 07:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:45.530 07:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.530 07:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.530 07:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.530 07:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.788 07:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:03:MjY3OWFjYTBmNWU1NWFhYjcyZTUyNmI3ZjNiODg3NWRkZmIzMDU3MGFjNWQ1MTJlNjBjMzQyZDg1ZGM4YjA2ZjRizus=: 00:11:46.725 07:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.725 07:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:46.725 07:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.725 07:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.725 07:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.725 07:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:46.725 07:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:46.725 07:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:46.725 07:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:46.725 07:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:11:46.725 07:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:46.725 07:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:46.725 07:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:46.725 07:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:46.725 07:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.725 07:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.725 07:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.725 07:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.725 07:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.725 07:15:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.725 07:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.660 00:11:47.660 07:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:47.660 07:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:47.660 07:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.660 07:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.660 07:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.660 07:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.660 07:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.660 07:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.660 07:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:47.660 { 00:11:47.660 "cntlid": 89, 00:11:47.660 "qid": 0, 00:11:47.660 "state": "enabled", 00:11:47.660 "thread": "nvmf_tgt_poll_group_000", 00:11:47.660 "listen_address": { 00:11:47.660 "trtype": "TCP", 00:11:47.660 "adrfam": "IPv4", 00:11:47.660 "traddr": "10.0.0.2", 00:11:47.660 "trsvcid": "4420" 00:11:47.660 }, 00:11:47.660 "peer_address": { 00:11:47.660 "trtype": "TCP", 00:11:47.660 "adrfam": "IPv4", 00:11:47.660 "traddr": "10.0.0.1", 00:11:47.660 "trsvcid": "36612" 00:11:47.661 }, 00:11:47.661 "auth": { 00:11:47.661 "state": "completed", 00:11:47.661 "digest": "sha384", 00:11:47.661 "dhgroup": "ffdhe8192" 00:11:47.661 } 00:11:47.661 } 00:11:47.661 ]' 00:11:47.661 07:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:47.918 07:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:47.918 07:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:47.918 07:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:47.918 07:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:47.918 07:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.918 07:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.918 07:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.177 07:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret 
DHHC-1:00:YjBmM2JlNWMzOTVmYzk2OGQyMzA3MjgwNWUzMjZlOTQ5YjA1YzExNTI2MThkNGI15W2YHA==: --dhchap-ctrl-secret DHHC-1:03:YzViNzAwMzVjMzk5YThlZTE0ZjY0ZTk1OGQ4MTNlYzViZGIzNWE5NDYxZWM3ZmQxNmI5YzM2YTliOWZjOWFhMePeSLs=: 00:11:48.744 07:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.744 07:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:48.744 07:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.744 07:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.744 07:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.744 07:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:48.744 07:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:48.744 07:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:49.033 07:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:11:49.033 07:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:49.033 07:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:49.033 07:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:49.033 07:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:49.033 07:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.033 07:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:49.033 07:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.033 07:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.292 07:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.292 07:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:49.292 07:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:49.860 00:11:49.860 07:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:49.860 07:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.860 07:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
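[editor's note] After every attach, the script checks both ends of the connection: the host must report the nvme0 controller, and the target's qpair listing must show authentication completed with the negotiated digest and dhgroup. A condensed sketch of those checks follows; expected values match the sha384/ffdhe8192 pass in progress here, and rpc_cmd/hostrpc are the same harness wrappers noted above.

# Host side: the attached controller should be listed by name.
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'      # expect: nvme0

# Target side: the qpair's auth block records the negotiated parameters.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
jq -r '.[0].auth.digest'  <<< "$qpairs"                   # expect: sha384
jq -r '.[0].auth.dhgroup' <<< "$qpairs"                   # expect: ffdhe8192
jq -r '.[0].auth.state'   <<< "$qpairs"                   # expect: completed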
00:11:50.119 07:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.119 07:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.119 07:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.119 07:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.119 07:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.119 07:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:50.119 { 00:11:50.119 "cntlid": 91, 00:11:50.119 "qid": 0, 00:11:50.119 "state": "enabled", 00:11:50.119 "thread": "nvmf_tgt_poll_group_000", 00:11:50.119 "listen_address": { 00:11:50.119 "trtype": "TCP", 00:11:50.119 "adrfam": "IPv4", 00:11:50.119 "traddr": "10.0.0.2", 00:11:50.119 "trsvcid": "4420" 00:11:50.119 }, 00:11:50.119 "peer_address": { 00:11:50.119 "trtype": "TCP", 00:11:50.119 "adrfam": "IPv4", 00:11:50.119 "traddr": "10.0.0.1", 00:11:50.119 "trsvcid": "36642" 00:11:50.119 }, 00:11:50.119 "auth": { 00:11:50.119 "state": "completed", 00:11:50.119 "digest": "sha384", 00:11:50.119 "dhgroup": "ffdhe8192" 00:11:50.119 } 00:11:50.119 } 00:11:50.119 ]' 00:11:50.119 07:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:50.119 07:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:50.119 07:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:50.119 07:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:50.119 07:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:50.119 07:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.119 07:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.119 07:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.377 07:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:01:MWUwOWZhMjI2ZWFkMWM5MDgyM2ZlNjcwZjYxM2JmNTYnhNf7: --dhchap-ctrl-secret DHHC-1:02:MTY4MDZjODQ4MjU3Yzc0MzdkNWU3ZTI4ZDFkMzQyMTQ5YjMzZDVhYWFjODg3ODZlm/yyyA==: 00:11:51.313 07:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.313 07:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:51.313 07:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.313 07:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.313 07:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.313 07:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:51.313 07:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:11:51.313 07:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:51.571 07:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:11:51.571 07:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:51.571 07:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:51.571 07:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:51.571 07:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:51.571 07:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.571 07:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.571 07:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.571 07:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.571 07:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.571 07:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.571 07:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:52.138 00:11:52.138 07:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:52.138 07:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.138 07:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:52.397 07:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.397 07:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.397 07:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.397 07:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.397 07:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.397 07:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:52.397 { 00:11:52.397 "cntlid": 93, 00:11:52.397 "qid": 0, 00:11:52.397 "state": "enabled", 00:11:52.397 "thread": "nvmf_tgt_poll_group_000", 00:11:52.397 "listen_address": { 00:11:52.397 "trtype": "TCP", 00:11:52.397 "adrfam": "IPv4", 00:11:52.397 "traddr": "10.0.0.2", 00:11:52.397 "trsvcid": "4420" 00:11:52.397 }, 00:11:52.397 "peer_address": { 00:11:52.397 "trtype": "TCP", 00:11:52.397 "adrfam": "IPv4", 00:11:52.397 "traddr": "10.0.0.1", 00:11:52.397 "trsvcid": "36614" 00:11:52.397 }, 00:11:52.397 
"auth": { 00:11:52.397 "state": "completed", 00:11:52.397 "digest": "sha384", 00:11:52.397 "dhgroup": "ffdhe8192" 00:11:52.397 } 00:11:52.397 } 00:11:52.397 ]' 00:11:52.397 07:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:52.397 07:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:52.397 07:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:52.654 07:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:52.654 07:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:52.654 07:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.654 07:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.654 07:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.911 07:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:02:MWE3MDIxYWMxYzgzMWU1YzZhZjMwNTQ5M2Q5ZDA2MWQ4NWVkNThhOGRiODg5NmE22Evdew==: --dhchap-ctrl-secret DHHC-1:01:NmEzYzQ4ZWEyMzBhNmY3OWM0ZjYyOTJmMjhjMTAzYTC0B3mf: 00:11:53.477 07:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.477 07:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:53.477 07:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.477 07:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.477 07:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.477 07:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:53.477 07:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:53.477 07:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:53.736 07:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:11:53.736 07:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:53.736 07:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:53.736 07:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:53.736 07:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:53.736 07:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.736 07:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key3 00:11:53.736 07:16:02 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.736 07:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.736 07:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.736 07:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:53.736 07:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:54.716 00:11:54.716 07:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:54.716 07:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.716 07:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:54.716 07:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.716 07:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.716 07:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.716 07:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.716 07:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.716 07:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:54.716 { 00:11:54.716 "cntlid": 95, 00:11:54.716 "qid": 0, 00:11:54.716 "state": "enabled", 00:11:54.716 "thread": "nvmf_tgt_poll_group_000", 00:11:54.716 "listen_address": { 00:11:54.716 "trtype": "TCP", 00:11:54.716 "adrfam": "IPv4", 00:11:54.716 "traddr": "10.0.0.2", 00:11:54.716 "trsvcid": "4420" 00:11:54.716 }, 00:11:54.716 "peer_address": { 00:11:54.716 "trtype": "TCP", 00:11:54.716 "adrfam": "IPv4", 00:11:54.716 "traddr": "10.0.0.1", 00:11:54.716 "trsvcid": "36632" 00:11:54.716 }, 00:11:54.716 "auth": { 00:11:54.716 "state": "completed", 00:11:54.716 "digest": "sha384", 00:11:54.716 "dhgroup": "ffdhe8192" 00:11:54.716 } 00:11:54.716 } 00:11:54.716 ]' 00:11:54.716 07:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:54.973 07:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:54.973 07:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:54.973 07:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:54.973 07:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:54.973 07:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.973 07:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.973 07:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.229 07:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:03:MjY3OWFjYTBmNWU1NWFhYjcyZTUyNmI3ZjNiODg3NWRkZmIzMDU3MGFjNWQ1MTJlNjBjMzQyZDg1ZGM4YjA2ZjRizus=: 00:11:56.160 07:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.160 07:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:56.160 07:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.160 07:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.160 07:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.160 07:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:56.160 07:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:56.160 07:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:56.160 07:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:56.160 07:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:56.160 07:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:11:56.160 07:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:56.160 07:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:56.160 07:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:56.160 07:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:56.160 07:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.160 07:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.160 07:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.160 07:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.160 07:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.160 07:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.160 07:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.417 00:11:56.675 07:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
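For reference, the iteration traced above (connect_authenticate sha512 null 0) reduces to roughly the sequence below. This is a condensed sketch of what the xtrace output shows, not a verbatim excerpt of target/auth.sh; hostrpc wraps scripts/rpc.py -s /var/tmp/host.sock exactly as it appears in the trace.
# configure the host side for DH-HMAC-CHAP with the digest/dhgroup under test
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
# register the host on the target subsystem with the key pair under test
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# attach a controller, which performs the authentication handshake
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
# read back the qpairs to verify the negotiated auth parameters, then tear down
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
hostrpc bdev_nvme_detach_controller nvme0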
00:11:56.675 07:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:56.675 07:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.933 07:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.933 07:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.933 07:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.933 07:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.933 07:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.933 07:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:56.933 { 00:11:56.933 "cntlid": 97, 00:11:56.933 "qid": 0, 00:11:56.933 "state": "enabled", 00:11:56.933 "thread": "nvmf_tgt_poll_group_000", 00:11:56.933 "listen_address": { 00:11:56.933 "trtype": "TCP", 00:11:56.933 "adrfam": "IPv4", 00:11:56.933 "traddr": "10.0.0.2", 00:11:56.933 "trsvcid": "4420" 00:11:56.933 }, 00:11:56.933 "peer_address": { 00:11:56.933 "trtype": "TCP", 00:11:56.933 "adrfam": "IPv4", 00:11:56.933 "traddr": "10.0.0.1", 00:11:56.933 "trsvcid": "36648" 00:11:56.933 }, 00:11:56.933 "auth": { 00:11:56.933 "state": "completed", 00:11:56.933 "digest": "sha512", 00:11:56.933 "dhgroup": "null" 00:11:56.933 } 00:11:56.933 } 00:11:56.933 ]' 00:11:56.933 07:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:56.933 07:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:56.933 07:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:56.933 07:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:56.933 07:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:56.933 07:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.933 07:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.933 07:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.499 07:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:00:YjBmM2JlNWMzOTVmYzk2OGQyMzA3MjgwNWUzMjZlOTQ5YjA1YzExNTI2MThkNGI15W2YHA==: --dhchap-ctrl-secret DHHC-1:03:YzViNzAwMzVjMzk5YThlZTE0ZjY0ZTk1OGQ4MTNlYzViZGIzNWE5NDYxZWM3ZmQxNmI5YzM2YTliOWZjOWFhMePeSLs=: 00:11:58.065 07:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.065 07:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:11:58.065 07:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.065 07:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.065 07:16:06 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.065 07:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:58.065 07:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:58.065 07:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:58.323 07:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:11:58.323 07:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:58.323 07:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:58.323 07:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:58.323 07:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:58.323 07:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.323 07:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.323 07:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.323 07:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.323 07:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.323 07:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.323 07:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.581 00:11:58.839 07:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:58.839 07:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:58.839 07:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.097 07:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.097 07:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.097 07:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.097 07:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.097 07:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.097 07:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:59.097 { 00:11:59.097 "cntlid": 99, 00:11:59.097 "qid": 0, 00:11:59.097 "state": "enabled", 00:11:59.097 "thread": "nvmf_tgt_poll_group_000", 00:11:59.097 "listen_address": { 00:11:59.097 "trtype": "TCP", 00:11:59.097 "adrfam": 
"IPv4", 00:11:59.097 "traddr": "10.0.0.2", 00:11:59.097 "trsvcid": "4420" 00:11:59.097 }, 00:11:59.097 "peer_address": { 00:11:59.097 "trtype": "TCP", 00:11:59.097 "adrfam": "IPv4", 00:11:59.097 "traddr": "10.0.0.1", 00:11:59.097 "trsvcid": "36670" 00:11:59.097 }, 00:11:59.097 "auth": { 00:11:59.097 "state": "completed", 00:11:59.097 "digest": "sha512", 00:11:59.097 "dhgroup": "null" 00:11:59.097 } 00:11:59.097 } 00:11:59.097 ]' 00:11:59.097 07:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:59.097 07:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:59.097 07:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:59.097 07:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:59.097 07:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:59.097 07:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.097 07:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.097 07:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.356 07:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:01:MWUwOWZhMjI2ZWFkMWM5MDgyM2ZlNjcwZjYxM2JmNTYnhNf7: --dhchap-ctrl-secret DHHC-1:02:MTY4MDZjODQ4MjU3Yzc0MzdkNWU3ZTI4ZDFkMzQyMTQ5YjMzZDVhYWFjODg3ODZlm/yyyA==: 00:12:00.289 07:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.289 07:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:00.289 07:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.289 07:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.289 07:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.289 07:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:00.289 07:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:00.289 07:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:00.548 07:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:12:00.548 07:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:00.548 07:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:00.548 07:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:00.548 07:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:00.548 07:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.548 07:16:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.548 07:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.548 07:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.548 07:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.548 07:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.548 07:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.809 00:12:00.809 07:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:00.809 07:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:00.809 07:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.067 07:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.067 07:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.067 07:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.067 07:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.326 07:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.326 07:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:01.326 { 00:12:01.326 "cntlid": 101, 00:12:01.326 "qid": 0, 00:12:01.326 "state": "enabled", 00:12:01.326 "thread": "nvmf_tgt_poll_group_000", 00:12:01.326 "listen_address": { 00:12:01.326 "trtype": "TCP", 00:12:01.326 "adrfam": "IPv4", 00:12:01.326 "traddr": "10.0.0.2", 00:12:01.326 "trsvcid": "4420" 00:12:01.326 }, 00:12:01.326 "peer_address": { 00:12:01.326 "trtype": "TCP", 00:12:01.326 "adrfam": "IPv4", 00:12:01.326 "traddr": "10.0.0.1", 00:12:01.326 "trsvcid": "59372" 00:12:01.326 }, 00:12:01.326 "auth": { 00:12:01.326 "state": "completed", 00:12:01.326 "digest": "sha512", 00:12:01.326 "dhgroup": "null" 00:12:01.326 } 00:12:01.326 } 00:12:01.326 ]' 00:12:01.326 07:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:01.326 07:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:01.326 07:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:01.326 07:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:01.326 07:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:01.326 07:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.326 07:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
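After each attach, the trace verifies the negotiated parameters by running jq over the captured qpairs JSON. A minimal sketch of that check, assuming the JSON is held in the qpairs variable as in the trace (the here-string plumbing is illustrative, not copied from auth.sh):
# each qpair reports the digest, DH group and final auth state that were negotiated
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "null" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]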
00:12:01.326 07:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.584 07:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:02:MWE3MDIxYWMxYzgzMWU1YzZhZjMwNTQ5M2Q5ZDA2MWQ4NWVkNThhOGRiODg5NmE22Evdew==: --dhchap-ctrl-secret DHHC-1:01:NmEzYzQ4ZWEyMzBhNmY3OWM0ZjYyOTJmMjhjMTAzYTC0B3mf: 00:12:02.519 07:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.520 07:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:02.520 07:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.520 07:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.520 07:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.520 07:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:02.520 07:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:02.520 07:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:02.520 07:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:12:02.520 07:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:02.520 07:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:02.520 07:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:02.520 07:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:02.520 07:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.520 07:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key3 00:12:02.520 07:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.520 07:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.520 07:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.520 07:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:02.520 07:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:02.779 00:12:03.038 07:16:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:12:03.038 07:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:03.038 07:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.297 07:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.297 07:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.297 07:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.297 07:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.297 07:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.297 07:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:03.297 { 00:12:03.297 "cntlid": 103, 00:12:03.297 "qid": 0, 00:12:03.297 "state": "enabled", 00:12:03.297 "thread": "nvmf_tgt_poll_group_000", 00:12:03.297 "listen_address": { 00:12:03.297 "trtype": "TCP", 00:12:03.297 "adrfam": "IPv4", 00:12:03.297 "traddr": "10.0.0.2", 00:12:03.297 "trsvcid": "4420" 00:12:03.297 }, 00:12:03.297 "peer_address": { 00:12:03.297 "trtype": "TCP", 00:12:03.297 "adrfam": "IPv4", 00:12:03.297 "traddr": "10.0.0.1", 00:12:03.297 "trsvcid": "59386" 00:12:03.297 }, 00:12:03.297 "auth": { 00:12:03.297 "state": "completed", 00:12:03.297 "digest": "sha512", 00:12:03.297 "dhgroup": "null" 00:12:03.297 } 00:12:03.297 } 00:12:03.297 ]' 00:12:03.297 07:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:03.297 07:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:03.297 07:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:03.297 07:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:03.297 07:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:03.297 07:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.297 07:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.297 07:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.555 07:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:03:MjY3OWFjYTBmNWU1NWFhYjcyZTUyNmI3ZjNiODg3NWRkZmIzMDU3MGFjNWQ1MTJlNjBjMzQyZDg1ZGM4YjA2ZjRizus=: 00:12:04.489 07:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.489 07:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:04.489 07:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.489 07:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.489 07:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:12:04.489 07:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:04.489 07:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:04.489 07:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:04.489 07:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:04.746 07:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:12:04.746 07:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:04.746 07:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:04.746 07:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:04.746 07:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:04.746 07:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.746 07:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.746 07:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.746 07:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.746 07:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.746 07:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.746 07:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:05.004 00:12:05.004 07:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:05.004 07:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.004 07:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:05.262 07:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.262 07:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.262 07:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.262 07:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.262 07:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.262 07:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:05.262 { 00:12:05.262 "cntlid": 105, 00:12:05.262 "qid": 0, 00:12:05.262 "state": "enabled", 00:12:05.262 "thread": "nvmf_tgt_poll_group_000", 00:12:05.262 
"listen_address": { 00:12:05.262 "trtype": "TCP", 00:12:05.262 "adrfam": "IPv4", 00:12:05.262 "traddr": "10.0.0.2", 00:12:05.262 "trsvcid": "4420" 00:12:05.262 }, 00:12:05.262 "peer_address": { 00:12:05.262 "trtype": "TCP", 00:12:05.262 "adrfam": "IPv4", 00:12:05.262 "traddr": "10.0.0.1", 00:12:05.262 "trsvcid": "59418" 00:12:05.262 }, 00:12:05.262 "auth": { 00:12:05.262 "state": "completed", 00:12:05.262 "digest": "sha512", 00:12:05.262 "dhgroup": "ffdhe2048" 00:12:05.262 } 00:12:05.262 } 00:12:05.262 ]' 00:12:05.262 07:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:05.521 07:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:05.521 07:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:05.521 07:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:05.521 07:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:05.521 07:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.521 07:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.521 07:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.852 07:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:00:YjBmM2JlNWMzOTVmYzk2OGQyMzA3MjgwNWUzMjZlOTQ5YjA1YzExNTI2MThkNGI15W2YHA==: --dhchap-ctrl-secret DHHC-1:03:YzViNzAwMzVjMzk5YThlZTE0ZjY0ZTk1OGQ4MTNlYzViZGIzNWE5NDYxZWM3ZmQxNmI5YzM2YTliOWZjOWFhMePeSLs=: 00:12:06.421 07:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.421 07:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:06.421 07:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.421 07:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.421 07:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.421 07:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:06.421 07:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:06.421 07:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:06.679 07:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:12:06.679 07:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:06.679 07:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:06.679 07:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:06.679 07:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key1 00:12:06.679 07:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.679 07:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.679 07:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.679 07:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.937 07:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.937 07:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.937 07:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:07.196 00:12:07.196 07:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:07.196 07:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:07.196 07:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.454 07:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.454 07:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.454 07:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.454 07:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.454 07:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.454 07:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:07.454 { 00:12:07.454 "cntlid": 107, 00:12:07.454 "qid": 0, 00:12:07.454 "state": "enabled", 00:12:07.454 "thread": "nvmf_tgt_poll_group_000", 00:12:07.454 "listen_address": { 00:12:07.454 "trtype": "TCP", 00:12:07.454 "adrfam": "IPv4", 00:12:07.454 "traddr": "10.0.0.2", 00:12:07.454 "trsvcid": "4420" 00:12:07.454 }, 00:12:07.454 "peer_address": { 00:12:07.454 "trtype": "TCP", 00:12:07.454 "adrfam": "IPv4", 00:12:07.454 "traddr": "10.0.0.1", 00:12:07.454 "trsvcid": "59446" 00:12:07.454 }, 00:12:07.454 "auth": { 00:12:07.454 "state": "completed", 00:12:07.454 "digest": "sha512", 00:12:07.454 "dhgroup": "ffdhe2048" 00:12:07.454 } 00:12:07.454 } 00:12:07.454 ]' 00:12:07.454 07:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:07.454 07:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:07.454 07:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:07.713 07:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:07.713 07:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:07.713 07:16:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.713 07:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.713 07:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.972 07:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:01:MWUwOWZhMjI2ZWFkMWM5MDgyM2ZlNjcwZjYxM2JmNTYnhNf7: --dhchap-ctrl-secret DHHC-1:02:MTY4MDZjODQ4MjU3Yzc0MzdkNWU3ZTI4ZDFkMzQyMTQ5YjMzZDVhYWFjODg3ODZlm/yyyA==: 00:12:08.907 07:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.907 07:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:08.907 07:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.907 07:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.907 07:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.907 07:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:08.907 07:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:08.907 07:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:08.907 07:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:12:08.907 07:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:08.907 07:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:08.907 07:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:08.907 07:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:08.907 07:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.907 07:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.907 07:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.907 07:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.907 07:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.907 07:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.907 07:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:09.474 00:12:09.474 07:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:09.474 07:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:09.474 07:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.733 07:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.733 07:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.733 07:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.733 07:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.733 07:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.733 07:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:09.733 { 00:12:09.733 "cntlid": 109, 00:12:09.733 "qid": 0, 00:12:09.733 "state": "enabled", 00:12:09.733 "thread": "nvmf_tgt_poll_group_000", 00:12:09.733 "listen_address": { 00:12:09.733 "trtype": "TCP", 00:12:09.733 "adrfam": "IPv4", 00:12:09.733 "traddr": "10.0.0.2", 00:12:09.733 "trsvcid": "4420" 00:12:09.733 }, 00:12:09.733 "peer_address": { 00:12:09.733 "trtype": "TCP", 00:12:09.733 "adrfam": "IPv4", 00:12:09.733 "traddr": "10.0.0.1", 00:12:09.733 "trsvcid": "59478" 00:12:09.733 }, 00:12:09.733 "auth": { 00:12:09.733 "state": "completed", 00:12:09.733 "digest": "sha512", 00:12:09.733 "dhgroup": "ffdhe2048" 00:12:09.733 } 00:12:09.733 } 00:12:09.733 ]' 00:12:09.733 07:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:09.733 07:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:09.733 07:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:09.733 07:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:09.733 07:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:09.733 07:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.733 07:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.733 07:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.991 07:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:02:MWE3MDIxYWMxYzgzMWU1YzZhZjMwNTQ5M2Q5ZDA2MWQ4NWVkNThhOGRiODg5NmE22Evdew==: --dhchap-ctrl-secret DHHC-1:01:NmEzYzQ4ZWEyMzBhNmY3OWM0ZjYyOTJmMjhjMTAzYTC0B3mf: 00:12:10.933 07:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.933 07:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:10.933 07:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.933 07:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.933 07:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.933 07:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:10.933 07:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:10.933 07:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:10.933 07:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:12:10.933 07:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:10.933 07:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:10.933 07:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:10.933 07:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:10.933 07:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.933 07:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key3 00:12:10.933 07:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.933 07:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.933 07:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.933 07:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:10.933 07:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:11.500 00:12:11.500 07:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:11.500 07:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:11.500 07:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.758 07:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.758 07:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.758 07:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.758 07:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.758 07:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.758 07:16:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:12:11.758 { 00:12:11.758 "cntlid": 111, 00:12:11.758 "qid": 0, 00:12:11.758 "state": "enabled", 00:12:11.758 "thread": "nvmf_tgt_poll_group_000", 00:12:11.758 "listen_address": { 00:12:11.758 "trtype": "TCP", 00:12:11.758 "adrfam": "IPv4", 00:12:11.758 "traddr": "10.0.0.2", 00:12:11.758 "trsvcid": "4420" 00:12:11.758 }, 00:12:11.758 "peer_address": { 00:12:11.758 "trtype": "TCP", 00:12:11.758 "adrfam": "IPv4", 00:12:11.758 "traddr": "10.0.0.1", 00:12:11.758 "trsvcid": "33626" 00:12:11.758 }, 00:12:11.758 "auth": { 00:12:11.758 "state": "completed", 00:12:11.758 "digest": "sha512", 00:12:11.758 "dhgroup": "ffdhe2048" 00:12:11.758 } 00:12:11.758 } 00:12:11.758 ]' 00:12:11.758 07:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:11.758 07:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:11.758 07:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:11.758 07:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:11.758 07:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:11.758 07:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.758 07:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.758 07:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.016 07:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:03:MjY3OWFjYTBmNWU1NWFhYjcyZTUyNmI3ZjNiODg3NWRkZmIzMDU3MGFjNWQ1MTJlNjBjMzQyZDg1ZGM4YjA2ZjRizus=: 00:12:12.949 07:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.950 07:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:12.950 07:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.950 07:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.950 07:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.950 07:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:12.950 07:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:12.950 07:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:12.950 07:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:13.208 07:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:12:13.208 07:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:13.208 07:16:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha512 00:12:13.208 07:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:13.208 07:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:13.208 07:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.208 07:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.208 07:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.208 07:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.208 07:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.208 07:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.208 07:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.467 00:12:13.467 07:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:13.467 07:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.467 07:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:13.726 07:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.726 07:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.726 07:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.726 07:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.726 07:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.726 07:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:13.726 { 00:12:13.726 "cntlid": 113, 00:12:13.726 "qid": 0, 00:12:13.726 "state": "enabled", 00:12:13.726 "thread": "nvmf_tgt_poll_group_000", 00:12:13.726 "listen_address": { 00:12:13.726 "trtype": "TCP", 00:12:13.726 "adrfam": "IPv4", 00:12:13.726 "traddr": "10.0.0.2", 00:12:13.726 "trsvcid": "4420" 00:12:13.726 }, 00:12:13.726 "peer_address": { 00:12:13.726 "trtype": "TCP", 00:12:13.726 "adrfam": "IPv4", 00:12:13.726 "traddr": "10.0.0.1", 00:12:13.726 "trsvcid": "33656" 00:12:13.726 }, 00:12:13.726 "auth": { 00:12:13.726 "state": "completed", 00:12:13.726 "digest": "sha512", 00:12:13.726 "dhgroup": "ffdhe3072" 00:12:13.726 } 00:12:13.726 } 00:12:13.726 ]' 00:12:13.726 07:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:13.985 07:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:13.985 07:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:13.985 07:16:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:13.985 07:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:13.985 07:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.985 07:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.985 07:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.244 07:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:00:YjBmM2JlNWMzOTVmYzk2OGQyMzA3MjgwNWUzMjZlOTQ5YjA1YzExNTI2MThkNGI15W2YHA==: --dhchap-ctrl-secret DHHC-1:03:YzViNzAwMzVjMzk5YThlZTE0ZjY0ZTk1OGQ4MTNlYzViZGIzNWE5NDYxZWM3ZmQxNmI5YzM2YTliOWZjOWFhMePeSLs=: 00:12:14.847 07:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.847 07:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:14.847 07:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.847 07:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.847 07:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.848 07:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:14.848 07:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:14.848 07:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:15.106 07:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:12:15.106 07:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:15.106 07:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:15.106 07:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:15.106 07:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:15.106 07:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.106 07:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.106 07:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.106 07:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.106 07:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.106 07:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.106 07:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.363 00:12:15.621 07:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:15.621 07:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.621 07:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:15.879 07:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.879 07:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.879 07:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.879 07:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.879 07:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.879 07:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:15.879 { 00:12:15.879 "cntlid": 115, 00:12:15.879 "qid": 0, 00:12:15.879 "state": "enabled", 00:12:15.879 "thread": "nvmf_tgt_poll_group_000", 00:12:15.879 "listen_address": { 00:12:15.879 "trtype": "TCP", 00:12:15.879 "adrfam": "IPv4", 00:12:15.879 "traddr": "10.0.0.2", 00:12:15.879 "trsvcid": "4420" 00:12:15.879 }, 00:12:15.879 "peer_address": { 00:12:15.879 "trtype": "TCP", 00:12:15.879 "adrfam": "IPv4", 00:12:15.879 "traddr": "10.0.0.1", 00:12:15.879 "trsvcid": "33694" 00:12:15.879 }, 00:12:15.879 "auth": { 00:12:15.879 "state": "completed", 00:12:15.879 "digest": "sha512", 00:12:15.879 "dhgroup": "ffdhe3072" 00:12:15.879 } 00:12:15.879 } 00:12:15.879 ]' 00:12:15.879 07:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:15.879 07:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:15.879 07:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:15.879 07:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:15.879 07:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:15.879 07:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.879 07:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.879 07:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.138 07:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:01:MWUwOWZhMjI2ZWFkMWM5MDgyM2ZlNjcwZjYxM2JmNTYnhNf7: --dhchap-ctrl-secret DHHC-1:02:MTY4MDZjODQ4MjU3Yzc0MzdkNWU3ZTI4ZDFkMzQyMTQ5YjMzZDVhYWFjODg3ODZlm/yyyA==: 00:12:17.074 07:16:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.074 07:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:17.074 07:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.074 07:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.074 07:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.074 07:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:17.074 07:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:17.074 07:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:17.331 07:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:12:17.331 07:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:17.331 07:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:17.331 07:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:17.331 07:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:17.331 07:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.331 07:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.331 07:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.331 07:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.331 07:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.331 07:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.331 07:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.589 00:12:17.589 07:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:17.589 07:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.589 07:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:17.898 07:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.898 07:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:12:17.898 07:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.898 07:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.898 07:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.898 07:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:17.898 { 00:12:17.898 "cntlid": 117, 00:12:17.898 "qid": 0, 00:12:17.898 "state": "enabled", 00:12:17.898 "thread": "nvmf_tgt_poll_group_000", 00:12:17.898 "listen_address": { 00:12:17.898 "trtype": "TCP", 00:12:17.898 "adrfam": "IPv4", 00:12:17.898 "traddr": "10.0.0.2", 00:12:17.898 "trsvcid": "4420" 00:12:17.898 }, 00:12:17.898 "peer_address": { 00:12:17.898 "trtype": "TCP", 00:12:17.898 "adrfam": "IPv4", 00:12:17.898 "traddr": "10.0.0.1", 00:12:17.898 "trsvcid": "33714" 00:12:17.898 }, 00:12:17.898 "auth": { 00:12:17.898 "state": "completed", 00:12:17.898 "digest": "sha512", 00:12:17.898 "dhgroup": "ffdhe3072" 00:12:17.898 } 00:12:17.898 } 00:12:17.898 ]' 00:12:17.898 07:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:17.898 07:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:17.898 07:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:17.898 07:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:17.898 07:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:18.155 07:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.155 07:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.155 07:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.414 07:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:02:MWE3MDIxYWMxYzgzMWU1YzZhZjMwNTQ5M2Q5ZDA2MWQ4NWVkNThhOGRiODg5NmE22Evdew==: --dhchap-ctrl-secret DHHC-1:01:NmEzYzQ4ZWEyMzBhNmY3OWM0ZjYyOTJmMjhjMTAzYTC0B3mf: 00:12:18.980 07:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.980 07:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:18.980 07:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.980 07:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.980 07:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.980 07:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:18.980 07:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:18.980 07:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:19.237 07:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:12:19.237 07:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:19.237 07:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:19.237 07:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:19.237 07:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:19.237 07:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.237 07:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key3 00:12:19.237 07:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.237 07:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.237 07:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.237 07:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:19.237 07:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:19.803 00:12:19.803 07:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:19.803 07:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:19.803 07:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.061 07:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.061 07:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.061 07:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.061 07:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.061 07:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.061 07:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:20.061 { 00:12:20.061 "cntlid": 119, 00:12:20.061 "qid": 0, 00:12:20.061 "state": "enabled", 00:12:20.062 "thread": "nvmf_tgt_poll_group_000", 00:12:20.062 "listen_address": { 00:12:20.062 "trtype": "TCP", 00:12:20.062 "adrfam": "IPv4", 00:12:20.062 "traddr": "10.0.0.2", 00:12:20.062 "trsvcid": "4420" 00:12:20.062 }, 00:12:20.062 "peer_address": { 00:12:20.062 "trtype": "TCP", 00:12:20.062 "adrfam": "IPv4", 00:12:20.062 "traddr": "10.0.0.1", 00:12:20.062 "trsvcid": "33754" 00:12:20.062 }, 00:12:20.062 "auth": { 00:12:20.062 "state": "completed", 00:12:20.062 "digest": "sha512", 00:12:20.062 "dhgroup": "ffdhe3072" 00:12:20.062 } 00:12:20.062 } 00:12:20.062 ]' 00:12:20.062 07:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:20.062 
07:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:20.062 07:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:20.062 07:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:20.062 07:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:20.062 07:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.062 07:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.062 07:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.320 07:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:03:MjY3OWFjYTBmNWU1NWFhYjcyZTUyNmI3ZjNiODg3NWRkZmIzMDU3MGFjNWQ1MTJlNjBjMzQyZDg1ZGM4YjA2ZjRizus=: 00:12:21.255 07:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.255 07:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:21.255 07:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.255 07:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.255 07:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.255 07:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:21.255 07:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:21.255 07:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:21.255 07:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:21.513 07:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:12:21.513 07:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:21.513 07:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:21.513 07:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:21.513 07:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:21.513 07:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.513 07:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.513 07:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.513 07:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.513 07:16:30 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.513 07:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.513 07:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.771 00:12:22.029 07:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:22.029 07:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.029 07:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:22.288 07:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.288 07:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.288 07:16:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.288 07:16:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.288 07:16:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.288 07:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:22.288 { 00:12:22.288 "cntlid": 121, 00:12:22.288 "qid": 0, 00:12:22.288 "state": "enabled", 00:12:22.288 "thread": "nvmf_tgt_poll_group_000", 00:12:22.288 "listen_address": { 00:12:22.288 "trtype": "TCP", 00:12:22.288 "adrfam": "IPv4", 00:12:22.288 "traddr": "10.0.0.2", 00:12:22.288 "trsvcid": "4420" 00:12:22.288 }, 00:12:22.288 "peer_address": { 00:12:22.288 "trtype": "TCP", 00:12:22.288 "adrfam": "IPv4", 00:12:22.288 "traddr": "10.0.0.1", 00:12:22.288 "trsvcid": "52872" 00:12:22.288 }, 00:12:22.288 "auth": { 00:12:22.288 "state": "completed", 00:12:22.288 "digest": "sha512", 00:12:22.288 "dhgroup": "ffdhe4096" 00:12:22.288 } 00:12:22.288 } 00:12:22.288 ]' 00:12:22.288 07:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:22.288 07:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:22.288 07:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:22.288 07:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:22.288 07:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:22.288 07:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.288 07:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.288 07:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.572 07:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret 
DHHC-1:00:YjBmM2JlNWMzOTVmYzk2OGQyMzA3MjgwNWUzMjZlOTQ5YjA1YzExNTI2MThkNGI15W2YHA==: --dhchap-ctrl-secret DHHC-1:03:YzViNzAwMzVjMzk5YThlZTE0ZjY0ZTk1OGQ4MTNlYzViZGIzNWE5NDYxZWM3ZmQxNmI5YzM2YTliOWZjOWFhMePeSLs=: 00:12:23.507 07:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.507 07:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:23.507 07:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.507 07:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.507 07:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.507 07:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:23.507 07:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:23.507 07:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:23.765 07:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:12:23.765 07:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:23.765 07:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:23.765 07:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:23.765 07:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:23.765 07:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.765 07:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.765 07:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.765 07:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.765 07:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.765 07:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.765 07:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.077 00:12:24.077 07:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:24.077 07:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:24.077 07:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
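Each pass in the trace above repeats the same host/target RPC sequence, with only the digest, FFDHE group and key index changing. The following is a minimal shell sketch of one such pass, assembled from the sockets, NQNs and flags visible in the log; key1/ckey1 refer to key names registered earlier in the test, the target-side calls are shown against the default RPC socket (the harness routes them through its rpc_cmd wrapper), and this is a sketch rather than the literal target/auth.sh source.

    # Host-side DH-HMAC-CHAP policy (bdev_nvme initiator, RPC socket /var/tmp/host.sock)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    # Target side: allow the host NQN with host key key1 and controller key ckey1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Host side: attach the controller, which triggers the authentication handshake
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Verify on the target that the new qpair finished authentication, then tear down
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.state'      # expected: completed
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
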
00:12:24.335 07:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.335 07:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.335 07:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.335 07:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.335 07:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.335 07:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:24.335 { 00:12:24.335 "cntlid": 123, 00:12:24.335 "qid": 0, 00:12:24.335 "state": "enabled", 00:12:24.335 "thread": "nvmf_tgt_poll_group_000", 00:12:24.335 "listen_address": { 00:12:24.335 "trtype": "TCP", 00:12:24.335 "adrfam": "IPv4", 00:12:24.335 "traddr": "10.0.0.2", 00:12:24.335 "trsvcid": "4420" 00:12:24.335 }, 00:12:24.335 "peer_address": { 00:12:24.335 "trtype": "TCP", 00:12:24.335 "adrfam": "IPv4", 00:12:24.335 "traddr": "10.0.0.1", 00:12:24.335 "trsvcid": "52904" 00:12:24.335 }, 00:12:24.335 "auth": { 00:12:24.335 "state": "completed", 00:12:24.335 "digest": "sha512", 00:12:24.335 "dhgroup": "ffdhe4096" 00:12:24.335 } 00:12:24.335 } 00:12:24.335 ]' 00:12:24.335 07:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:24.335 07:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:24.335 07:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:24.335 07:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:24.335 07:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:24.593 07:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.593 07:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.593 07:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.851 07:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:01:MWUwOWZhMjI2ZWFkMWM5MDgyM2ZlNjcwZjYxM2JmNTYnhNf7: --dhchap-ctrl-secret DHHC-1:02:MTY4MDZjODQ4MjU3Yzc0MzdkNWU3ZTI4ZDFkMzQyMTQ5YjMzZDVhYWFjODg3ODZlm/yyyA==: 00:12:25.419 07:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.419 07:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:25.419 07:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.419 07:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.419 07:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.419 07:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:25.419 07:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:12:25.419 07:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:25.678 07:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:12:25.678 07:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:25.678 07:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:25.678 07:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:25.678 07:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:25.678 07:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.678 07:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.678 07:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.678 07:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.678 07:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.678 07:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.678 07:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:26.245 00:12:26.245 07:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:26.245 07:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:26.245 07:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.503 07:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.503 07:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.503 07:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.503 07:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.503 07:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.503 07:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:26.503 { 00:12:26.503 "cntlid": 125, 00:12:26.503 "qid": 0, 00:12:26.503 "state": "enabled", 00:12:26.503 "thread": "nvmf_tgt_poll_group_000", 00:12:26.503 "listen_address": { 00:12:26.503 "trtype": "TCP", 00:12:26.503 "adrfam": "IPv4", 00:12:26.503 "traddr": "10.0.0.2", 00:12:26.503 "trsvcid": "4420" 00:12:26.503 }, 00:12:26.503 "peer_address": { 00:12:26.503 "trtype": "TCP", 00:12:26.503 "adrfam": "IPv4", 00:12:26.503 "traddr": "10.0.0.1", 00:12:26.503 "trsvcid": "52932" 00:12:26.503 }, 00:12:26.503 
"auth": { 00:12:26.503 "state": "completed", 00:12:26.503 "digest": "sha512", 00:12:26.503 "dhgroup": "ffdhe4096" 00:12:26.503 } 00:12:26.503 } 00:12:26.503 ]' 00:12:26.503 07:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:26.503 07:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:26.504 07:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:26.504 07:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:26.504 07:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:26.762 07:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.762 07:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.762 07:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.021 07:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:02:MWE3MDIxYWMxYzgzMWU1YzZhZjMwNTQ5M2Q5ZDA2MWQ4NWVkNThhOGRiODg5NmE22Evdew==: --dhchap-ctrl-secret DHHC-1:01:NmEzYzQ4ZWEyMzBhNmY3OWM0ZjYyOTJmMjhjMTAzYTC0B3mf: 00:12:27.587 07:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.587 07:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:27.587 07:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.587 07:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.587 07:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.587 07:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:27.587 07:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:27.587 07:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:27.845 07:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:12:27.845 07:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:27.845 07:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:27.845 07:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:27.845 07:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:27.845 07:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.845 07:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key3 00:12:27.845 07:16:36 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.845 07:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.845 07:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.845 07:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:27.845 07:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:28.411 00:12:28.411 07:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:28.411 07:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:28.411 07:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.669 07:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.669 07:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.669 07:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.669 07:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.669 07:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.669 07:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:28.669 { 00:12:28.669 "cntlid": 127, 00:12:28.669 "qid": 0, 00:12:28.669 "state": "enabled", 00:12:28.669 "thread": "nvmf_tgt_poll_group_000", 00:12:28.669 "listen_address": { 00:12:28.669 "trtype": "TCP", 00:12:28.669 "adrfam": "IPv4", 00:12:28.669 "traddr": "10.0.0.2", 00:12:28.669 "trsvcid": "4420" 00:12:28.669 }, 00:12:28.669 "peer_address": { 00:12:28.669 "trtype": "TCP", 00:12:28.669 "adrfam": "IPv4", 00:12:28.669 "traddr": "10.0.0.1", 00:12:28.669 "trsvcid": "52954" 00:12:28.669 }, 00:12:28.669 "auth": { 00:12:28.669 "state": "completed", 00:12:28.669 "digest": "sha512", 00:12:28.669 "dhgroup": "ffdhe4096" 00:12:28.669 } 00:12:28.669 } 00:12:28.669 ]' 00:12:28.669 07:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:28.669 07:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:28.669 07:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:28.670 07:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:28.670 07:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:28.928 07:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.928 07:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.928 07:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.187 07:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:03:MjY3OWFjYTBmNWU1NWFhYjcyZTUyNmI3ZjNiODg3NWRkZmIzMDU3MGFjNWQ1MTJlNjBjMzQyZDg1ZGM4YjA2ZjRizus=: 00:12:29.753 07:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.012 07:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:30.012 07:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.012 07:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.012 07:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.012 07:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:30.012 07:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:30.012 07:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:30.012 07:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:30.272 07:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:12:30.272 07:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:30.272 07:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:30.272 07:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:30.272 07:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:30.272 07:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.272 07:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.272 07:16:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.272 07:16:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.272 07:16:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.272 07:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.272 07:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.864 00:12:30.864 07:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:30.864 07:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.864 07:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:31.122 07:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.122 07:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.122 07:16:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.122 07:16:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.122 07:16:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.122 07:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:31.122 { 00:12:31.122 "cntlid": 129, 00:12:31.122 "qid": 0, 00:12:31.122 "state": "enabled", 00:12:31.122 "thread": "nvmf_tgt_poll_group_000", 00:12:31.122 "listen_address": { 00:12:31.122 "trtype": "TCP", 00:12:31.122 "adrfam": "IPv4", 00:12:31.122 "traddr": "10.0.0.2", 00:12:31.122 "trsvcid": "4420" 00:12:31.122 }, 00:12:31.122 "peer_address": { 00:12:31.122 "trtype": "TCP", 00:12:31.122 "adrfam": "IPv4", 00:12:31.122 "traddr": "10.0.0.1", 00:12:31.122 "trsvcid": "39854" 00:12:31.122 }, 00:12:31.122 "auth": { 00:12:31.122 "state": "completed", 00:12:31.122 "digest": "sha512", 00:12:31.122 "dhgroup": "ffdhe6144" 00:12:31.122 } 00:12:31.122 } 00:12:31.122 ]' 00:12:31.122 07:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:31.122 07:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:31.122 07:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:31.122 07:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:31.122 07:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:31.122 07:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.122 07:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.122 07:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.381 07:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:00:YjBmM2JlNWMzOTVmYzk2OGQyMzA3MjgwNWUzMjZlOTQ5YjA1YzExNTI2MThkNGI15W2YHA==: --dhchap-ctrl-secret DHHC-1:03:YzViNzAwMzVjMzk5YThlZTE0ZjY0ZTk1OGQ4MTNlYzViZGIzNWE5NDYxZWM3ZmQxNmI5YzM2YTliOWZjOWFhMePeSLs=: 00:12:32.317 07:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.317 07:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:32.317 07:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.317 07:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.317 07:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
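The qpair dump that keeps reappearing above is what the test actually asserts on: it reads the negotiated authentication parameters back from nvmf_subsystem_get_qpairs and compares them against the digest and group it just configured. A hedged sketch of those three checks for this ffdhe6144 pass, assuming the target answers on its default RPC socket (the harness's rpc_cmd wrapper does the equivalent):

    qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512"    ]]   # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe6144" ]]   # negotiated FFDHE group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]   # DH-HMAC-CHAP handshake finished
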
00:12:32.317 07:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:32.317 07:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:32.317 07:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:32.575 07:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:12:32.575 07:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:32.575 07:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:32.575 07:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:32.576 07:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:32.576 07:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.576 07:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.576 07:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.576 07:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.576 07:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.576 07:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.576 07:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.142 00:12:33.142 07:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:33.142 07:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.142 07:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:33.401 07:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.401 07:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.401 07:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.401 07:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.401 07:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.401 07:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:33.401 { 00:12:33.401 "cntlid": 131, 00:12:33.401 "qid": 0, 00:12:33.401 "state": "enabled", 00:12:33.401 "thread": "nvmf_tgt_poll_group_000", 00:12:33.401 "listen_address": { 00:12:33.401 "trtype": "TCP", 00:12:33.401 "adrfam": "IPv4", 00:12:33.401 "traddr": "10.0.0.2", 00:12:33.401 
"trsvcid": "4420" 00:12:33.401 }, 00:12:33.401 "peer_address": { 00:12:33.401 "trtype": "TCP", 00:12:33.401 "adrfam": "IPv4", 00:12:33.401 "traddr": "10.0.0.1", 00:12:33.401 "trsvcid": "39880" 00:12:33.401 }, 00:12:33.401 "auth": { 00:12:33.401 "state": "completed", 00:12:33.401 "digest": "sha512", 00:12:33.401 "dhgroup": "ffdhe6144" 00:12:33.401 } 00:12:33.401 } 00:12:33.401 ]' 00:12:33.401 07:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:33.401 07:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:33.401 07:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:33.401 07:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:33.401 07:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:33.659 07:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.659 07:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.659 07:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.917 07:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:01:MWUwOWZhMjI2ZWFkMWM5MDgyM2ZlNjcwZjYxM2JmNTYnhNf7: --dhchap-ctrl-secret DHHC-1:02:MTY4MDZjODQ4MjU3Yzc0MzdkNWU3ZTI4ZDFkMzQyMTQ5YjMzZDVhYWFjODg3ODZlm/yyyA==: 00:12:34.489 07:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.489 07:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:34.489 07:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.489 07:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.746 07:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.746 07:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:34.746 07:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:34.746 07:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:35.004 07:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:12:35.004 07:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:35.004 07:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:35.004 07:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:35.004 07:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:35.004 07:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.004 07:16:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.004 07:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.004 07:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.004 07:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.004 07:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.004 07:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.569 00:12:35.569 07:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:35.569 07:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:35.569 07:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.827 07:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.828 07:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.828 07:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.828 07:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.828 07:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.828 07:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:35.828 { 00:12:35.828 "cntlid": 133, 00:12:35.828 "qid": 0, 00:12:35.828 "state": "enabled", 00:12:35.828 "thread": "nvmf_tgt_poll_group_000", 00:12:35.828 "listen_address": { 00:12:35.828 "trtype": "TCP", 00:12:35.828 "adrfam": "IPv4", 00:12:35.828 "traddr": "10.0.0.2", 00:12:35.828 "trsvcid": "4420" 00:12:35.828 }, 00:12:35.828 "peer_address": { 00:12:35.828 "trtype": "TCP", 00:12:35.828 "adrfam": "IPv4", 00:12:35.828 "traddr": "10.0.0.1", 00:12:35.828 "trsvcid": "39920" 00:12:35.828 }, 00:12:35.828 "auth": { 00:12:35.828 "state": "completed", 00:12:35.828 "digest": "sha512", 00:12:35.828 "dhgroup": "ffdhe6144" 00:12:35.828 } 00:12:35.828 } 00:12:35.828 ]' 00:12:35.828 07:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:35.828 07:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:35.828 07:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:35.828 07:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:35.828 07:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:36.086 07:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.086 07:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:12:36.086 07:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.350 07:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:02:MWE3MDIxYWMxYzgzMWU1YzZhZjMwNTQ5M2Q5ZDA2MWQ4NWVkNThhOGRiODg5NmE22Evdew==: --dhchap-ctrl-secret DHHC-1:01:NmEzYzQ4ZWEyMzBhNmY3OWM0ZjYyOTJmMjhjMTAzYTC0B3mf: 00:12:37.285 07:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.285 07:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:37.285 07:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.285 07:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.285 07:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.285 07:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:37.285 07:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:37.285 07:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:37.285 07:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:12:37.285 07:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:37.285 07:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:37.285 07:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:37.285 07:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:37.285 07:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.285 07:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key3 00:12:37.285 07:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.285 07:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.285 07:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.285 07:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:37.285 07:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:37.851 
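Besides the SPDK initiator, each key is also exercised through the kernel initiator via nvme-cli, with the DH-HMAC-CHAP secrets passed on the command line. A sketch of that step for the key2 pass just shown, with the flags copied from the trace and the base64 secret material elided rather than reproduced:

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 \
        --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 \
        --dhchap-secret 'DHHC-1:02:<host secret, elided>' \
        --dhchap-ctrl-secret 'DHHC-1:01:<controller secret, elided>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0    # log reports: disconnected 1 controller(s)
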
00:12:37.851 07:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:37.851 07:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:37.851 07:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.108 07:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.108 07:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.108 07:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.108 07:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.108 07:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.108 07:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:38.108 { 00:12:38.108 "cntlid": 135, 00:12:38.108 "qid": 0, 00:12:38.108 "state": "enabled", 00:12:38.108 "thread": "nvmf_tgt_poll_group_000", 00:12:38.108 "listen_address": { 00:12:38.108 "trtype": "TCP", 00:12:38.108 "adrfam": "IPv4", 00:12:38.108 "traddr": "10.0.0.2", 00:12:38.108 "trsvcid": "4420" 00:12:38.108 }, 00:12:38.108 "peer_address": { 00:12:38.108 "trtype": "TCP", 00:12:38.108 "adrfam": "IPv4", 00:12:38.108 "traddr": "10.0.0.1", 00:12:38.108 "trsvcid": "39950" 00:12:38.108 }, 00:12:38.108 "auth": { 00:12:38.108 "state": "completed", 00:12:38.108 "digest": "sha512", 00:12:38.108 "dhgroup": "ffdhe6144" 00:12:38.108 } 00:12:38.108 } 00:12:38.108 ]' 00:12:38.108 07:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:38.366 07:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:38.366 07:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:38.366 07:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:38.366 07:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:38.366 07:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.366 07:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.366 07:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.624 07:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:03:MjY3OWFjYTBmNWU1NWFhYjcyZTUyNmI3ZjNiODg3NWRkZmIzMDU3MGFjNWQ1MTJlNjBjMzQyZDg1ZGM4YjA2ZjRizus=: 00:12:39.558 07:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.558 07:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:39.558 07:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.558 07:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.558 07:16:48 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.558 07:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:39.558 07:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:39.558 07:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:39.558 07:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:39.558 07:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:12:39.558 07:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:39.558 07:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:39.558 07:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:39.558 07:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:39.558 07:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.558 07:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.558 07:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.558 07:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.816 07:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.816 07:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.816 07:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.381 00:12:40.381 07:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:40.381 07:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:40.381 07:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.639 07:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.639 07:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.639 07:16:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.639 07:16:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.639 07:16:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.639 07:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:40.639 { 00:12:40.639 "cntlid": 137, 00:12:40.639 "qid": 0, 00:12:40.639 "state": "enabled", 
00:12:40.639 "thread": "nvmf_tgt_poll_group_000", 00:12:40.639 "listen_address": { 00:12:40.639 "trtype": "TCP", 00:12:40.639 "adrfam": "IPv4", 00:12:40.639 "traddr": "10.0.0.2", 00:12:40.639 "trsvcid": "4420" 00:12:40.639 }, 00:12:40.639 "peer_address": { 00:12:40.639 "trtype": "TCP", 00:12:40.639 "adrfam": "IPv4", 00:12:40.639 "traddr": "10.0.0.1", 00:12:40.639 "trsvcid": "33578" 00:12:40.639 }, 00:12:40.639 "auth": { 00:12:40.639 "state": "completed", 00:12:40.639 "digest": "sha512", 00:12:40.639 "dhgroup": "ffdhe8192" 00:12:40.639 } 00:12:40.639 } 00:12:40.639 ]' 00:12:40.639 07:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:40.639 07:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:40.639 07:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:40.639 07:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:40.639 07:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:40.639 07:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.639 07:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.639 07:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.204 07:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:00:YjBmM2JlNWMzOTVmYzk2OGQyMzA3MjgwNWUzMjZlOTQ5YjA1YzExNTI2MThkNGI15W2YHA==: --dhchap-ctrl-secret DHHC-1:03:YzViNzAwMzVjMzk5YThlZTE0ZjY0ZTk1OGQ4MTNlYzViZGIzNWE5NDYxZWM3ZmQxNmI5YzM2YTliOWZjOWFhMePeSLs=: 00:12:41.818 07:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.818 07:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:41.818 07:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.818 07:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.818 07:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.818 07:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:41.818 07:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:41.818 07:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:42.091 07:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:12:42.091 07:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:42.091 07:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:42.091 07:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:42.091 
07:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:42.091 07:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.091 07:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.091 07:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.091 07:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.091 07:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.091 07:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.091 07:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.657 00:12:42.657 07:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:42.657 07:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.657 07:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:42.953 07:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.953 07:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.953 07:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.953 07:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.953 07:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.953 07:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:42.953 { 00:12:42.953 "cntlid": 139, 00:12:42.953 "qid": 0, 00:12:42.953 "state": "enabled", 00:12:42.953 "thread": "nvmf_tgt_poll_group_000", 00:12:42.953 "listen_address": { 00:12:42.953 "trtype": "TCP", 00:12:42.953 "adrfam": "IPv4", 00:12:42.953 "traddr": "10.0.0.2", 00:12:42.953 "trsvcid": "4420" 00:12:42.953 }, 00:12:42.953 "peer_address": { 00:12:42.953 "trtype": "TCP", 00:12:42.953 "adrfam": "IPv4", 00:12:42.953 "traddr": "10.0.0.1", 00:12:42.953 "trsvcid": "33612" 00:12:42.953 }, 00:12:42.953 "auth": { 00:12:42.953 "state": "completed", 00:12:42.953 "digest": "sha512", 00:12:42.953 "dhgroup": "ffdhe8192" 00:12:42.953 } 00:12:42.953 } 00:12:42.953 ]' 00:12:42.953 07:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:42.953 07:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:42.953 07:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:42.953 07:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:42.953 07:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:12:43.211 07:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.211 07:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.211 07:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.468 07:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:01:MWUwOWZhMjI2ZWFkMWM5MDgyM2ZlNjcwZjYxM2JmNTYnhNf7: --dhchap-ctrl-secret DHHC-1:02:MTY4MDZjODQ4MjU3Yzc0MzdkNWU3ZTI4ZDFkMzQyMTQ5YjMzZDVhYWFjODg3ODZlm/yyyA==: 00:12:44.033 07:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.033 07:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:44.033 07:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.033 07:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.033 07:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.033 07:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:44.033 07:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:44.033 07:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:44.291 07:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:12:44.291 07:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:44.291 07:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:44.291 07:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:44.291 07:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:44.291 07:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.291 07:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.291 07:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.291 07:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.291 07:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.291 07:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.291 07:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.855 00:12:44.855 07:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:44.855 07:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.855 07:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:45.112 07:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.112 07:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.112 07:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.112 07:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.112 07:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.112 07:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:45.112 { 00:12:45.112 "cntlid": 141, 00:12:45.112 "qid": 0, 00:12:45.112 "state": "enabled", 00:12:45.112 "thread": "nvmf_tgt_poll_group_000", 00:12:45.112 "listen_address": { 00:12:45.112 "trtype": "TCP", 00:12:45.112 "adrfam": "IPv4", 00:12:45.112 "traddr": "10.0.0.2", 00:12:45.112 "trsvcid": "4420" 00:12:45.112 }, 00:12:45.112 "peer_address": { 00:12:45.112 "trtype": "TCP", 00:12:45.112 "adrfam": "IPv4", 00:12:45.112 "traddr": "10.0.0.1", 00:12:45.112 "trsvcid": "33644" 00:12:45.112 }, 00:12:45.112 "auth": { 00:12:45.112 "state": "completed", 00:12:45.112 "digest": "sha512", 00:12:45.112 "dhgroup": "ffdhe8192" 00:12:45.112 } 00:12:45.112 } 00:12:45.112 ]' 00:12:45.112 07:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:45.369 07:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:45.369 07:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:45.369 07:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:45.369 07:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:45.369 07:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.369 07:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.369 07:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.626 07:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:02:MWE3MDIxYWMxYzgzMWU1YzZhZjMwNTQ5M2Q5ZDA2MWQ4NWVkNThhOGRiODg5NmE22Evdew==: --dhchap-ctrl-secret DHHC-1:01:NmEzYzQ4ZWEyMzBhNmY3OWM0ZjYyOTJmMjhjMTAzYTC0B3mf: 00:12:46.562 07:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.562 07:16:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:46.562 07:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.562 07:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.562 07:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.562 07:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:46.562 07:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:46.562 07:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:46.562 07:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:12:46.562 07:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:46.562 07:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:46.562 07:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:46.562 07:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:46.562 07:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.562 07:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key3 00:12:46.562 07:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.562 07:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.562 07:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.562 07:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:46.562 07:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:47.499 00:12:47.499 07:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:47.499 07:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.499 07:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:47.499 07:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.499 07:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.499 07:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.499 07:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.499 07:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
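For reference, each of these key/dhgroup iterations drives the same DH-HMAC-CHAP round trip with the RPCs visible in the trace. A minimal hand-run sketch follows, assuming the target listens on the default /var/tmp/spdk.sock, the host bdev layer on /var/tmp/host.sock, and that key names such as key3 were registered with both applications earlier in the run (not shown in this excerpt):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9

  # pin the host to one digest/dhgroup combination for this iteration
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

  # allow the host NQN on the target, bound to a DH-HMAC-CHAP key
  $rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

  # attach from the host side, then inspect the negotiated auth parameters
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key3
  $rpc -s /var/tmp/spdk.sock nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'

  # tear down before the next iteration
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $rpc -s /var/tmp/spdk.sock nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The trace additionally repeats the same handshake through the kernel initiator (nvme connect ... --dhchap-secret DHHC-1:...) between the detach and the remove_host step.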
00:12:47.499 07:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:47.499 { 00:12:47.499 "cntlid": 143, 00:12:47.499 "qid": 0, 00:12:47.499 "state": "enabled", 00:12:47.499 "thread": "nvmf_tgt_poll_group_000", 00:12:47.499 "listen_address": { 00:12:47.499 "trtype": "TCP", 00:12:47.499 "adrfam": "IPv4", 00:12:47.499 "traddr": "10.0.0.2", 00:12:47.499 "trsvcid": "4420" 00:12:47.499 }, 00:12:47.499 "peer_address": { 00:12:47.499 "trtype": "TCP", 00:12:47.499 "adrfam": "IPv4", 00:12:47.499 "traddr": "10.0.0.1", 00:12:47.499 "trsvcid": "33664" 00:12:47.499 }, 00:12:47.499 "auth": { 00:12:47.499 "state": "completed", 00:12:47.499 "digest": "sha512", 00:12:47.499 "dhgroup": "ffdhe8192" 00:12:47.499 } 00:12:47.499 } 00:12:47.499 ]' 00:12:47.499 07:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:47.757 07:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:47.757 07:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:47.757 07:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:47.757 07:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:47.757 07:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.757 07:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.757 07:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.015 07:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:03:MjY3OWFjYTBmNWU1NWFhYjcyZTUyNmI3ZjNiODg3NWRkZmIzMDU3MGFjNWQ1MTJlNjBjMzQyZDg1ZGM4YjA2ZjRizus=: 00:12:48.680 07:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.680 07:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:48.680 07:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.680 07:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.680 07:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.680 07:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:48.680 07:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:12:48.680 07:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:48.680 07:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:48.680 07:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:48.680 07:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:48.939 07:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:12:48.939 07:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:48.939 07:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:48.939 07:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:48.939 07:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:48.939 07:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.939 07:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.939 07:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.939 07:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.939 07:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.197 07:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.197 07:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.765 00:12:49.765 07:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:49.765 07:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:49.765 07:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.024 07:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.024 07:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.024 07:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.024 07:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.024 07:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.024 07:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:50.024 { 00:12:50.024 "cntlid": 145, 00:12:50.024 "qid": 0, 00:12:50.024 "state": "enabled", 00:12:50.024 "thread": "nvmf_tgt_poll_group_000", 00:12:50.024 "listen_address": { 00:12:50.024 "trtype": "TCP", 00:12:50.024 "adrfam": "IPv4", 00:12:50.024 "traddr": "10.0.0.2", 00:12:50.024 "trsvcid": "4420" 00:12:50.024 }, 00:12:50.024 "peer_address": { 00:12:50.024 "trtype": "TCP", 00:12:50.024 "adrfam": "IPv4", 00:12:50.024 "traddr": "10.0.0.1", 00:12:50.024 "trsvcid": "33692" 00:12:50.024 }, 00:12:50.024 "auth": { 00:12:50.024 "state": "completed", 00:12:50.024 "digest": "sha512", 00:12:50.024 "dhgroup": "ffdhe8192" 00:12:50.024 } 00:12:50.024 } 
00:12:50.024 ]' 00:12:50.024 07:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:50.024 07:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:50.024 07:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:50.024 07:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:50.024 07:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:50.282 07:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.282 07:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.282 07:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.540 07:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:00:YjBmM2JlNWMzOTVmYzk2OGQyMzA3MjgwNWUzMjZlOTQ5YjA1YzExNTI2MThkNGI15W2YHA==: --dhchap-ctrl-secret DHHC-1:03:YzViNzAwMzVjMzk5YThlZTE0ZjY0ZTk1OGQ4MTNlYzViZGIzNWE5NDYxZWM3ZmQxNmI5YzM2YTliOWZjOWFhMePeSLs=: 00:12:51.107 07:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.107 07:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:51.107 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.107 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.107 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.107 07:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key1 00:12:51.366 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.366 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.366 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.366 07:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:51.366 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:51.366 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:51.366 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:51.366 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.366 07:17:00 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:51.366 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.366 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:51.366 07:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:51.933 request: 00:12:51.933 { 00:12:51.933 "name": "nvme0", 00:12:51.933 "trtype": "tcp", 00:12:51.933 "traddr": "10.0.0.2", 00:12:51.933 "adrfam": "ipv4", 00:12:51.933 "trsvcid": "4420", 00:12:51.933 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:51.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9", 00:12:51.933 "prchk_reftag": false, 00:12:51.933 "prchk_guard": false, 00:12:51.933 "hdgst": false, 00:12:51.933 "ddgst": false, 00:12:51.933 "dhchap_key": "key2", 00:12:51.933 "method": "bdev_nvme_attach_controller", 00:12:51.933 "req_id": 1 00:12:51.933 } 00:12:51.933 Got JSON-RPC error response 00:12:51.933 response: 00:12:51.933 { 00:12:51.933 "code": -5, 00:12:51.933 "message": "Input/output error" 00:12:51.933 } 00:12:51.933 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:51.933 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:51.933 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:51.933 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:51.933 07:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:51.933 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.933 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.933 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.933 07:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.933 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.933 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.933 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.933 07:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:51.933 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:51.933 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:51.933 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:51.933 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.933 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:51.933 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.933 07:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:51.933 07:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:52.501 request: 00:12:52.501 { 00:12:52.501 "name": "nvme0", 00:12:52.501 "trtype": "tcp", 00:12:52.501 "traddr": "10.0.0.2", 00:12:52.501 "adrfam": "ipv4", 00:12:52.501 "trsvcid": "4420", 00:12:52.501 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:52.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9", 00:12:52.501 "prchk_reftag": false, 00:12:52.501 "prchk_guard": false, 00:12:52.501 "hdgst": false, 00:12:52.501 "ddgst": false, 00:12:52.501 "dhchap_key": "key1", 00:12:52.501 "dhchap_ctrlr_key": "ckey2", 00:12:52.501 "method": "bdev_nvme_attach_controller", 00:12:52.501 "req_id": 1 00:12:52.501 } 00:12:52.501 Got JSON-RPC error response 00:12:52.501 response: 00:12:52.501 { 00:12:52.501 "code": -5, 00:12:52.501 "message": "Input/output error" 00:12:52.501 } 00:12:52.501 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:52.501 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:52.501 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:52.501 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:52.501 07:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:52.501 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.501 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.501 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.501 07:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key1 00:12:52.501 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.501 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.501 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.502 07:17:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:52.502 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:52.502 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:52.502 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:52.502 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:52.502 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:52.502 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:52.502 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:52.502 07:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.070 request: 00:12:53.070 { 00:12:53.070 "name": "nvme0", 00:12:53.070 "trtype": "tcp", 00:12:53.070 "traddr": "10.0.0.2", 00:12:53.070 "adrfam": "ipv4", 00:12:53.070 "trsvcid": "4420", 00:12:53.070 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:53.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9", 00:12:53.070 "prchk_reftag": false, 00:12:53.070 "prchk_guard": false, 00:12:53.070 "hdgst": false, 00:12:53.070 "ddgst": false, 00:12:53.070 "dhchap_key": "key1", 00:12:53.070 "dhchap_ctrlr_key": "ckey1", 00:12:53.070 "method": "bdev_nvme_attach_controller", 00:12:53.070 "req_id": 1 00:12:53.070 } 00:12:53.070 Got JSON-RPC error response 00:12:53.070 response: 00:12:53.070 { 00:12:53.070 "code": -5, 00:12:53.070 "message": "Input/output error" 00:12:53.070 } 00:12:53.070 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:53.070 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:53.070 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:53.070 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:53.070 07:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:53.070 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.070 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.070 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.070 07:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # 
killprocess 69074 00:12:53.070 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69074 ']' 00:12:53.070 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69074 00:12:53.070 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:12:53.070 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:53.070 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69074 00:12:53.070 killing process with pid 69074 00:12:53.070 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:53.070 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:53.070 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69074' 00:12:53.070 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69074 00:12:53.070 07:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69074 00:12:53.328 07:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:53.328 07:17:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:53.328 07:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:53.328 07:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.329 07:17:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=72226 00:12:53.329 07:17:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:53.329 07:17:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 72226 00:12:53.329 07:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72226 ']' 00:12:53.329 07:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.329 07:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:53.329 07:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.329 07:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:53.329 07:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.265 07:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:54.265 07:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:54.265 07:17:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:54.265 07:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:54.265 07:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.265 07:17:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.265 07:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:54.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
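The restart here brings the target back up with the nvmf_auth debug log component enabled for the negative tests that follow. A rough equivalent of what the nvmfappstart/waitforlisten helpers do at this point is sketched below; the polling loop and the explicit framework_start_init call are assumptions standing in for helper bodies that the xtrace output does not show:

  spdk=/home/vagrant/spdk_repo/spdk

  # relaunch the target inside the test netns with DH-HMAC-CHAP debug logging on
  ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!

  # wait until the RPC server answers on the default socket (/var/tmp/spdk.sock)
  until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done

  # --wait-for-rpc pauses start-up, so initialization has to be finished explicitly
  "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock framework_start_init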
00:12:54.265 07:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 72226 00:12:54.265 07:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72226 ']' 00:12:54.265 07:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.265 07:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:54.265 07:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.265 07:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:54.265 07:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.523 07:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:54.523 07:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:54.523 07:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:12:54.523 07:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.523 07:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.780 07:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.780 07:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:12:54.780 07:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:54.780 07:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:54.780 07:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:54.780 07:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:54.780 07:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.780 07:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key3 00:12:54.780 07:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.780 07:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.780 07:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.780 07:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:54.781 07:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:55.347 00:12:55.347 07:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:55.347 07:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:55.347 07:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.606 07:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:12:55.606 07:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.606 07:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.606 07:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.606 07:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.606 07:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:55.606 { 00:12:55.606 "cntlid": 1, 00:12:55.606 "qid": 0, 00:12:55.606 "state": "enabled", 00:12:55.606 "thread": "nvmf_tgt_poll_group_000", 00:12:55.606 "listen_address": { 00:12:55.606 "trtype": "TCP", 00:12:55.606 "adrfam": "IPv4", 00:12:55.606 "traddr": "10.0.0.2", 00:12:55.606 "trsvcid": "4420" 00:12:55.606 }, 00:12:55.606 "peer_address": { 00:12:55.606 "trtype": "TCP", 00:12:55.606 "adrfam": "IPv4", 00:12:55.606 "traddr": "10.0.0.1", 00:12:55.606 "trsvcid": "36122" 00:12:55.606 }, 00:12:55.606 "auth": { 00:12:55.606 "state": "completed", 00:12:55.606 "digest": "sha512", 00:12:55.606 "dhgroup": "ffdhe8192" 00:12:55.606 } 00:12:55.606 } 00:12:55.606 ]' 00:12:55.606 07:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:55.606 07:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:55.606 07:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:55.865 07:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:55.865 07:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:55.865 07:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.865 07:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.865 07:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.123 07:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-secret DHHC-1:03:MjY3OWFjYTBmNWU1NWFhYjcyZTUyNmI3ZjNiODg3NWRkZmIzMDU3MGFjNWQ1MTJlNjBjMzQyZDg1ZGM4YjA2ZjRizus=: 00:12:56.690 07:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.691 07:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:56.691 07:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.691 07:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.691 07:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.691 07:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --dhchap-key key3 00:12:56.691 07:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.691 07:17:05 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.691 07:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.691 07:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:56.691 07:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:56.949 07:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:56.949 07:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:56.949 07:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:56.949 07:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:56.949 07:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:56.949 07:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:56.949 07:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:56.949 07:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:56.949 07:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:57.515 request: 00:12:57.515 { 00:12:57.515 "name": "nvme0", 00:12:57.515 "trtype": "tcp", 00:12:57.515 "traddr": "10.0.0.2", 00:12:57.515 "adrfam": "ipv4", 00:12:57.515 "trsvcid": "4420", 00:12:57.515 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:57.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9", 00:12:57.515 "prchk_reftag": false, 00:12:57.515 "prchk_guard": false, 00:12:57.516 "hdgst": false, 00:12:57.516 "ddgst": false, 00:12:57.516 "dhchap_key": "key3", 00:12:57.516 "method": "bdev_nvme_attach_controller", 00:12:57.516 "req_id": 1 00:12:57.516 } 00:12:57.516 Got JSON-RPC error response 00:12:57.516 response: 00:12:57.516 { 00:12:57.516 "code": -5, 00:12:57.516 "message": "Input/output error" 00:12:57.516 } 00:12:57.516 07:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:57.516 07:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:57.516 07:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:57.516 07:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:57.516 07:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:12:57.516 07:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s 
sha256,sha384,sha512 00:12:57.516 07:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:57.516 07:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:57.774 07:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:57.774 07:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:57.774 07:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:57.774 07:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:57.774 07:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:57.774 07:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:57.774 07:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:57.774 07:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:57.774 07:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:58.033 request: 00:12:58.033 { 00:12:58.033 "name": "nvme0", 00:12:58.033 "trtype": "tcp", 00:12:58.033 "traddr": "10.0.0.2", 00:12:58.033 "adrfam": "ipv4", 00:12:58.033 "trsvcid": "4420", 00:12:58.033 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:58.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9", 00:12:58.033 "prchk_reftag": false, 00:12:58.033 "prchk_guard": false, 00:12:58.033 "hdgst": false, 00:12:58.033 "ddgst": false, 00:12:58.033 "dhchap_key": "key3", 00:12:58.033 "method": "bdev_nvme_attach_controller", 00:12:58.033 "req_id": 1 00:12:58.033 } 00:12:58.033 Got JSON-RPC error response 00:12:58.033 response: 00:12:58.033 { 00:12:58.033 "code": -5, 00:12:58.033 "message": "Input/output error" 00:12:58.033 } 00:12:58.033 07:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:58.033 07:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:58.033 07:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:58.033 07:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:58.033 07:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:12:58.033 07:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:12:58.033 07:17:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@175 -- # IFS=, 00:12:58.033 07:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:58.033 07:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:58.033 07:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:58.292 07:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:58.292 07:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.292 07:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.292 07:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.292 07:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:12:58.292 07:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.292 07:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.292 07:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.292 07:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:58.292 07:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:58.292 07:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:58.292 07:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:58.292 07:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:58.292 07:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:58.292 07:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:58.292 07:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:58.292 07:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:58.551 request: 00:12:58.551 { 00:12:58.551 "name": "nvme0", 
00:12:58.551 "trtype": "tcp", 00:12:58.551 "traddr": "10.0.0.2", 00:12:58.551 "adrfam": "ipv4", 00:12:58.551 "trsvcid": "4420", 00:12:58.551 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:58.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9", 00:12:58.551 "prchk_reftag": false, 00:12:58.551 "prchk_guard": false, 00:12:58.551 "hdgst": false, 00:12:58.551 "ddgst": false, 00:12:58.551 "dhchap_key": "key0", 00:12:58.551 "dhchap_ctrlr_key": "key1", 00:12:58.551 "method": "bdev_nvme_attach_controller", 00:12:58.551 "req_id": 1 00:12:58.551 } 00:12:58.551 Got JSON-RPC error response 00:12:58.551 response: 00:12:58.551 { 00:12:58.551 "code": -5, 00:12:58.551 "message": "Input/output error" 00:12:58.551 } 00:12:58.551 07:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:58.551 07:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:58.551 07:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:58.551 07:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:58.551 07:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:58.551 07:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:58.808 00:12:59.065 07:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:12:59.065 07:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.065 07:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:12:59.324 07:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.324 07:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.324 07:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.583 07:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:12:59.583 07:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:12:59.583 07:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 69093 00:12:59.583 07:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69093 ']' 00:12:59.583 07:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69093 00:12:59.583 07:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:12:59.583 07:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:59.583 07:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69093 00:12:59.583 killing process with pid 69093 00:12:59.583 07:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:59.583 07:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_1 = sudo ']' 00:12:59.583 07:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69093' 00:12:59.583 07:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69093 00:12:59.583 07:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69093 00:12:59.842 07:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:12:59.842 07:17:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:59.842 07:17:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:12:59.842 07:17:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:59.842 07:17:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:12:59.842 07:17:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:59.842 07:17:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:59.842 rmmod nvme_tcp 00:12:59.842 rmmod nvme_fabrics 00:12:59.842 rmmod nvme_keyring 00:12:59.842 07:17:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:59.842 07:17:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:12:59.842 07:17:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:12:59.842 07:17:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 72226 ']' 00:12:59.842 07:17:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 72226 00:12:59.842 07:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 72226 ']' 00:12:59.842 07:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 72226 00:12:59.842 07:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:12:59.842 07:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:59.842 07:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72226 00:12:59.842 killing process with pid 72226 00:12:59.842 07:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:59.842 07:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:59.842 07:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72226' 00:12:59.842 07:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 72226 00:12:59.842 07:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 72226 00:13:00.101 07:17:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:00.101 07:17:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:00.101 07:17:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:00.101 07:17:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:00.101 07:17:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:00.101 07:17:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.101 07:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.101 07:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.101 07:17:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:00.101 
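The DH-HMAC-CHAP exercise above reduces to a short host-side RPC sequence. A minimal sketch follows, using the same /var/tmp/host.sock RPC socket, the 10.0.0.2:4420 listener and the key slots (key0/key1/key3) that appear in the trace; every command here is one the trace already drives through hostrpc and scripts/rpc.py, nothing new is introduced.

# restrict or widen what the host may negotiate before attaching
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
# attach with a host key and (optionally) a controller key for bidirectional auth;
# the NOT wrapper above expects this call to fail, and the trace indeed records code -5 (Input/output error)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1
# confirm the controller exists, then tear it down
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The target side of the same steps is driven through rpc_cmd nvmf_subsystem_remove_host / nvmf_subsystem_add_host on that host NQN, as logged just above.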
07:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.OtN /tmp/spdk.key-sha256.VHW /tmp/spdk.key-sha384.Exp /tmp/spdk.key-sha512.MIs /tmp/spdk.key-sha512.YJw /tmp/spdk.key-sha384.fxl /tmp/spdk.key-sha256.pZS '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:00.101 00:13:00.101 real 3m0.615s 00:13:00.101 user 7m15.177s 00:13:00.101 sys 0m27.128s 00:13:00.101 07:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:00.101 ************************************ 00:13:00.101 END TEST nvmf_auth_target 00:13:00.101 ************************************ 00:13:00.101 07:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.101 07:17:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:00.101 07:17:09 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:13:00.101 07:17:09 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:00.101 07:17:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:00.101 07:17:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:00.101 07:17:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:00.101 ************************************ 00:13:00.101 START TEST nvmf_bdevio_no_huge 00:13:00.101 ************************************ 00:13:00.101 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:00.364 * Looking for test storage... 00:13:00.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:00.364 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:00.364 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:00.364 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.364 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.364 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.364 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.364 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.364 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.364 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.364 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.364 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.364 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.364 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:13:00.364 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:13:00.364 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.364 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.364 
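The common.sh prologue above fixes the host identity once, so every later connect and RPC reuses the same host NQN/ID pair. A rough restatement, with the UUID extraction written as a plain parameter expansion purely for illustration (the exact expression used by common.sh is not visible in the trace, which only shows the evaluated values):

NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumed extraction of the trailing UUID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVME_CONNECT='nvme connect'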
07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:00.364 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.364 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:00.364 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.364 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.364 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.364 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.364 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.364 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.364 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:00.365 Cannot find device "nvmf_tgt_br" 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:00.365 Cannot find device "nvmf_tgt_br2" 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:00.365 Cannot find device "nvmf_tgt_br" 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:00.365 Cannot find device "nvmf_tgt_br2" 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:00.365 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:00.365 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:00.365 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
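nvmf_veth_init, traced around this point, builds the whole test network from veth pairs and one network namespace. Condensed, the commands issued so far amount to the sketch below; the bridge enslaving, the iptables ACCEPT rules and the connectivity pings follow immediately afterwards in the trace.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target-side pairs
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends live inside the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up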
00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:00.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:00.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:13:00.624 00:13:00.624 --- 10.0.0.2 ping statistics --- 00:13:00.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.624 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:00.624 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:00.624 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:13:00.624 00:13:00.624 --- 10.0.0.3 ping statistics --- 00:13:00.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.624 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:00.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:00.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:13:00.624 00:13:00.624 --- 10.0.0.1 ping statistics --- 00:13:00.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.624 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=72539 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 72539 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 72539 ']' 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:00.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:00.624 07:17:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:00.624 [2024-07-15 07:17:09.563147] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:13:00.625 [2024-07-15 07:17:09.563270] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:00.882 [2024-07-15 07:17:09.712805] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:01.141 [2024-07-15 07:17:09.852046] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
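What distinguishes this suite is visible in the launch line above: the target runs inside the namespace with hugepages disabled. A short annotated restatement of that launch (the flag meanings noted here are the standard SPDK/DPDK ones, added only for readability):

# started by nvmfappstart -m 0x78 via the NVMF_APP array assembled above
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
# --no-huge : run without hugepage-backed memory
# -s 1024   : limit the memory pool to 1024 MB (matches the '-m 1024' DPDK EAL parameter logged above)
# -m 0x78   : core mask 0b1111000, i.e. the reactors on cores 3-6 reported above
# -e 0xFFFF : tracepoint group mask, matching the app_setup_trace notice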
00:13:01.141 [2024-07-15 07:17:09.852555] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:01.141 [2024-07-15 07:17:09.853217] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:01.141 [2024-07-15 07:17:09.853941] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:01.141 [2024-07-15 07:17:09.854323] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:01.141 [2024-07-15 07:17:09.854710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:01.141 [2024-07-15 07:17:09.854850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:01.141 [2024-07-15 07:17:09.854975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:01.141 [2024-07-15 07:17:09.854979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:01.141 [2024-07-15 07:17:09.861742] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:01.707 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:01.707 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:13:01.708 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:01.708 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:01.708 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:01.708 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.708 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:01.708 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.708 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:01.708 [2024-07-15 07:17:10.635369] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:01.708 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.708 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:01.708 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.708 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:01.708 Malloc0 00:13:01.708 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.708 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:01.708 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.708 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:01.967 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.967 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:01.967 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.967 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set 
+x 00:13:01.967 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.967 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.967 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.967 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:01.967 [2024-07-15 07:17:10.675575] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.967 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.967 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:01.967 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:01.967 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:13:01.967 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:13:01.967 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:01.967 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:01.967 { 00:13:01.967 "params": { 00:13:01.967 "name": "Nvme$subsystem", 00:13:01.967 "trtype": "$TEST_TRANSPORT", 00:13:01.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:01.967 "adrfam": "ipv4", 00:13:01.967 "trsvcid": "$NVMF_PORT", 00:13:01.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:01.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:01.967 "hdgst": ${hdgst:-false}, 00:13:01.967 "ddgst": ${ddgst:-false} 00:13:01.967 }, 00:13:01.967 "method": "bdev_nvme_attach_controller" 00:13:01.967 } 00:13:01.967 EOF 00:13:01.967 )") 00:13:01.967 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:13:01.967 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:13:01.967 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:13:01.967 07:17:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:01.967 "params": { 00:13:01.967 "name": "Nvme1", 00:13:01.967 "trtype": "tcp", 00:13:01.967 "traddr": "10.0.0.2", 00:13:01.967 "adrfam": "ipv4", 00:13:01.967 "trsvcid": "4420", 00:13:01.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:01.967 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:01.967 "hdgst": false, 00:13:01.967 "ddgst": false 00:13:01.967 }, 00:13:01.967 "method": "bdev_nvme_attach_controller" 00:13:01.967 }' 00:13:01.967 [2024-07-15 07:17:10.730665] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
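Before bdevio runs, the target is provisioned entirely over JSON-RPC. Stripped of the rpc_cmd wrapper (which talks to the target's default RPC socket), the calls made above are:

rpc.py nvmf_create_transport -t tcp -o -u 8192                      # TCP transport (-u: in-capsule data size)
rpc.py bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB malloc bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# bdevio then attaches to that listener as an NVMe-oF host, also without hugepages,
# consuming the bdev_nvme_attach_controller JSON produced by gen_nvmf_target_json above:
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024

This is why the suite reports Nvme1n1 as 131072 blocks of 512 bytes (64 MiB): it is the Malloc0 bdev re-exported over TCP.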
00:13:01.967 [2024-07-15 07:17:10.731372] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid72581 ] 00:13:01.967 [2024-07-15 07:17:10.877282] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:02.224 [2024-07-15 07:17:10.992318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.224 [2024-07-15 07:17:10.992397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.224 [2024-07-15 07:17:10.992406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.224 [2024-07-15 07:17:11.005703] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:02.224 I/O targets: 00:13:02.224 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:02.224 00:13:02.224 00:13:02.224 CUnit - A unit testing framework for C - Version 2.1-3 00:13:02.224 http://cunit.sourceforge.net/ 00:13:02.224 00:13:02.224 00:13:02.224 Suite: bdevio tests on: Nvme1n1 00:13:02.224 Test: blockdev write read block ...passed 00:13:02.224 Test: blockdev write zeroes read block ...passed 00:13:02.224 Test: blockdev write zeroes read no split ...passed 00:13:02.480 Test: blockdev write zeroes read split ...passed 00:13:02.480 Test: blockdev write zeroes read split partial ...passed 00:13:02.481 Test: blockdev reset ...[2024-07-15 07:17:11.190272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:02.481 [2024-07-15 07:17:11.190399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1768870 (9): Bad file descriptor 00:13:02.481 passed 00:13:02.481 Test: blockdev write read 8 blocks ...[2024-07-15 07:17:11.207387] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:02.481 passed 00:13:02.481 Test: blockdev write read size > 128k ...passed 00:13:02.481 Test: blockdev write read invalid size ...passed 00:13:02.481 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:02.481 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:02.481 Test: blockdev write read max offset ...passed 00:13:02.481 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:02.481 Test: blockdev writev readv 8 blocks ...passed 00:13:02.481 Test: blockdev writev readv 30 x 1block ...passed 00:13:02.481 Test: blockdev writev readv block ...passed 00:13:02.481 Test: blockdev writev readv size > 128k ...passed 00:13:02.481 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:02.481 Test: blockdev comparev and writev ...[2024-07-15 07:17:11.217136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:02.481 [2024-07-15 07:17:11.217193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:02.481 [2024-07-15 07:17:11.217221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:02.481 [2024-07-15 07:17:11.217235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:02.481 [2024-07-15 07:17:11.217606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:02.481 [2024-07-15 07:17:11.217635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:02.481 [2024-07-15 07:17:11.217657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:02.481 [2024-07-15 07:17:11.217670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:02.481 [2024-07-15 07:17:11.218052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:02.481 [2024-07-15 07:17:11.218119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:02.481 [2024-07-15 07:17:11.218152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:02.481 [2024-07-15 07:17:11.218169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:02.481 passed 00:13:02.481 Test: blockdev nvme passthru rw ...[2024-07-15 07:17:11.218639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:02.481 [2024-07-15 07:17:11.218678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:02.481 [2024-07-15 07:17:11.218701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:02.481 [2024-07-15 07:17:11.218713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED 
FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:02.481 passed 00:13:02.481 Test: blockdev nvme passthru vendor specific ...passed 00:13:02.481 Test: blockdev nvme admin passthru ...[2024-07-15 07:17:11.219581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:02.481 [2024-07-15 07:17:11.219621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:02.481 [2024-07-15 07:17:11.219755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:02.481 [2024-07-15 07:17:11.219775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:02.481 [2024-07-15 07:17:11.219896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:02.481 [2024-07-15 07:17:11.219918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:02.481 [2024-07-15 07:17:11.220059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:02.481 [2024-07-15 07:17:11.220106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:02.481 passed 00:13:02.481 Test: blockdev copy ...passed 00:13:02.481 00:13:02.481 Run Summary: Type Total Ran Passed Failed Inactive 00:13:02.481 suites 1 1 n/a 0 0 00:13:02.481 tests 23 23 23 0 0 00:13:02.481 asserts 152 152 152 0 n/a 00:13:02.481 00:13:02.481 Elapsed time = 0.166 seconds 00:13:02.738 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.738 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.738 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:02.738 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.738 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:02.738 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:02.738 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:02.738 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:13:02.738 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:02.738 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:13:02.738 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:02.738 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:02.738 rmmod nvme_tcp 00:13:02.738 rmmod nvme_fabrics 00:13:02.738 rmmod nvme_keyring 00:13:02.995 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:02.995 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:13:02.995 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:13:02.995 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 72539 ']' 00:13:02.995 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 72539 00:13:02.995 
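Teardown mirrors the setup: delete the subsystem over RPC, unload the kernel initiator modules, stop the namespaced target, then remove the namespace and flush the test interface (those last two steps appear just below in the trace). Roughly, with the remove_spdk_ns expansion written out as an assumption, since only the helper name is visible in the log:

rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp               # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above come from here
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"    # killprocess 72539 in the trace
_remove_spdk_ns                        # assumed to delete nvmf_tgt_ns_spdk; its body is not shown in the trace
ip -4 addr flush nvmf_init_if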
07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 72539 ']' 00:13:02.995 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 72539 00:13:02.995 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:13:02.995 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:02.995 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72539 00:13:02.995 killing process with pid 72539 00:13:02.995 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:13:02.995 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:13:02.995 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72539' 00:13:02.995 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 72539 00:13:02.995 07:17:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 72539 00:13:03.253 07:17:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:03.253 07:17:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:03.253 07:17:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:03.253 07:17:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:03.253 07:17:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:03.253 07:17:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.253 07:17:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:03.253 07:17:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.253 07:17:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:03.253 00:13:03.253 real 0m3.112s 00:13:03.253 user 0m10.093s 00:13:03.253 sys 0m1.194s 00:13:03.253 07:17:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:03.253 07:17:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:03.253 ************************************ 00:13:03.253 END TEST nvmf_bdevio_no_huge 00:13:03.253 ************************************ 00:13:03.253 07:17:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:03.253 07:17:12 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:03.253 07:17:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:03.253 07:17:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:03.253 07:17:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:03.253 ************************************ 00:13:03.253 START TEST nvmf_tls 00:13:03.253 ************************************ 00:13:03.253 07:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:03.511 * Looking for test storage... 
00:13:03.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:03.511 07:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:03.511 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:03.511 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.511 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.511 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.511 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.511 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.511 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.511 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.511 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.511 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.511 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:03.512 Cannot find device "nvmf_tgt_br" 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:03.512 Cannot find device "nvmf_tgt_br2" 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:03.512 Cannot find device "nvmf_tgt_br" 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:03.512 Cannot find device "nvmf_tgt_br2" 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:03.512 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:03.512 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:03.512 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:03.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:03.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:13:03.769 00:13:03.769 --- 10.0.0.2 ping statistics --- 00:13:03.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.769 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:03.769 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:03.769 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:13:03.769 00:13:03.769 --- 10.0.0.3 ping statistics --- 00:13:03.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.769 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:03.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:03.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:13:03.769 00:13:03.769 --- 10.0.0.1 ping statistics --- 00:13:03.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.769 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72763 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72763 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72763 ']' 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:03.769 07:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:04.027 [2024-07-15 07:17:12.732214] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:13:04.028 [2024-07-15 07:17:12.732346] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.028 [2024-07-15 07:17:12.872382] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.028 [2024-07-15 07:17:12.959873] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.028 [2024-07-15 07:17:12.959962] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
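Editor's note: the namespace plumbing exercised in the trace above reduces to one veth pair per target interface, a bridge on the host side, and nvmf_tgt launched inside the namespace with --wait-for-rpc so the socket layer can still be reconfigured before framework init. A minimal sketch of that setup, using the same interface names and addresses as the trace; relative paths to the SPDK checkout are an assumption (the log uses absolute paths under /home/vagrant/spdk_repo/spdk), and the second target interface (nvmf_tgt_if2/nvmf_tgt_br2) is omitted for brevity:

  # Sketch only: isolated test network as built by nvmf/common.sh in the trace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # connectivity check before starting the target
  # Start the target inside the namespace; --wait-for-rpc defers framework
  # initialization until the TLS socket options have been applied over RPC.
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -m 0x2 --wait-for-rpc &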
00:13:04.028 [2024-07-15 07:17:12.959981] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.028 [2024-07-15 07:17:12.959994] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.028 [2024-07-15 07:17:12.960006] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:04.028 [2024-07-15 07:17:12.960053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.959 07:17:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:04.959 07:17:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:04.959 07:17:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:04.959 07:17:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:04.959 07:17:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:04.959 07:17:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.959 07:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:13:04.959 07:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:05.218 true 00:13:05.218 07:17:14 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:13:05.218 07:17:14 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:05.475 07:17:14 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:13:05.475 07:17:14 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:13:05.475 07:17:14 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:05.733 07:17:14 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:13:05.733 07:17:14 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:05.991 07:17:14 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:13:05.991 07:17:14 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:13:05.991 07:17:14 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:06.250 07:17:15 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:06.250 07:17:15 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:13:06.509 07:17:15 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:13:06.509 07:17:15 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:13:06.509 07:17:15 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:06.767 07:17:15 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:13:06.767 07:17:15 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:13:06.767 07:17:15 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:13:06.767 07:17:15 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:07.333 07:17:16 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:07.333 07:17:16 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
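Editor's note: the RPC sequence being exercised here configures the ssl socket implementation while the target is still waiting for RPC. Collected into one place, a hedged sketch of the same calls (rpc.py path relative to the SPDK checkout is an assumption):

  # Assumes the nvmf_tgt started above is still in --wait-for-rpc state.
  ./scripts/rpc.py sock_set_default_impl -i ssl
  ./scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
  ./scripts/rpc.py sock_impl_get_options -i ssl | jq -r .tls_version   # expect 13
  ./scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls
  ./scripts/rpc.py sock_impl_get_options -i ssl | jq -r .enable_ktls   # expect true
  ./scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls
  ./scripts/rpc.py framework_start_init                                # finish startup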
00:13:07.333 07:17:16 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:13:07.333 07:17:16 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:13:07.333 07:17:16 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:07.632 07:17:16 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:07.632 07:17:16 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:13:07.894 07:17:16 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:13:07.894 07:17:16 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:13:07.894 07:17:16 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:07.894 07:17:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:07.894 07:17:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:07.894 07:17:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:07.894 07:17:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:13:07.894 07:17:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:07.894 07:17:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:07.894 07:17:16 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:07.894 07:17:16 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:07.894 07:17:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:07.895 07:17:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:07.895 07:17:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:07.895 07:17:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:13:07.895 07:17:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:07.895 07:17:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:08.153 07:17:16 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:08.153 07:17:16 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:13:08.153 07:17:16 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.X2l3ElTpXW 00:13:08.153 07:17:16 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:08.153 07:17:16 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.H7v7mznCAw 00:13:08.153 07:17:16 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:08.153 07:17:16 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:08.153 07:17:16 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.X2l3ElTpXW 00:13:08.153 07:17:16 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.H7v7mznCAw 00:13:08.153 07:17:16 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:08.411 07:17:17 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:08.670 [2024-07-15 07:17:17.424683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:13:08.670 07:17:17 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.X2l3ElTpXW 00:13:08.670 07:17:17 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.X2l3ElTpXW 00:13:08.670 07:17:17 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:08.929 [2024-07-15 07:17:17.723423] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:08.929 07:17:17 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:09.187 07:17:17 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:09.444 [2024-07-15 07:17:18.231570] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:09.445 [2024-07-15 07:17:18.231872] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.445 07:17:18 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:09.703 malloc0 00:13:09.703 07:17:18 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:09.961 07:17:18 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.X2l3ElTpXW 00:13:10.529 [2024-07-15 07:17:19.174898] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:10.529 07:17:19 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.X2l3ElTpXW 00:13:20.497 Initializing NVMe Controllers 00:13:20.497 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:20.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:20.497 Initialization complete. Launching workers. 
00:13:20.497 ======================================================== 00:13:20.497 Latency(us) 00:13:20.497 Device Information : IOPS MiB/s Average min max 00:13:20.497 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8799.98 34.37 7274.68 1374.73 13029.71 00:13:20.497 ======================================================== 00:13:20.497 Total : 8799.98 34.37 7274.68 1374.73 13029.71 00:13:20.497 00:13:20.497 07:17:29 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.X2l3ElTpXW 00:13:20.497 07:17:29 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:20.497 07:17:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:20.497 07:17:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:20.497 07:17:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.X2l3ElTpXW' 00:13:20.497 07:17:29 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:20.497 07:17:29 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72998 00:13:20.497 07:17:29 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:20.497 07:17:29 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:20.497 07:17:29 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72998 /var/tmp/bdevperf.sock 00:13:20.497 07:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72998 ']' 00:13:20.497 07:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:20.497 07:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:20.497 07:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:20.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:20.497 07:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:20.497 07:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:20.497 [2024-07-15 07:17:29.427901] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
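Editor's note: pulling the target-side provisioning and the initiator run out of the trace above, the end-to-end TLS path amounts to creating a TCP transport, exposing a malloc namespace behind a listener started with -k (TLS required), registering the host NQN with its PSK file, and attaching from the initiator with the same key. A condensed sketch reusing the names from the log; the /tmp key path is the one mktemp produced above and the relative binary paths are assumptions:

  KEY=/tmp/tmp.X2l3ElTpXW    # 0600 file holding the NVMeTLSkey-1:01:... string
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k          # -k: listener accepts only TLS connections
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk "$KEY"
  # Initiator side: bdevperf attaches a TLS-protected NVMe bdev with the same key.
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
  ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests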
00:13:20.497 [2024-07-15 07:17:29.427979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72998 ] 00:13:20.756 [2024-07-15 07:17:29.566217] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.756 [2024-07-15 07:17:29.636118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.756 [2024-07-15 07:17:29.669408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:21.015 07:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:21.015 07:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:21.015 07:17:29 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.X2l3ElTpXW 00:13:21.015 [2024-07-15 07:17:29.929362] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:21.015 [2024-07-15 07:17:29.929522] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:21.274 TLSTESTn1 00:13:21.274 07:17:30 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:21.274 Running I/O for 10 seconds... 00:13:31.262 00:13:31.262 Latency(us) 00:13:31.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.262 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:31.262 Verification LBA range: start 0x0 length 0x2000 00:13:31.262 TLSTESTn1 : 10.04 3672.89 14.35 0.00 0.00 34760.76 9592.09 31933.91 00:13:31.262 =================================================================================================================== 00:13:31.262 Total : 3672.89 14.35 0.00 0.00 34760.76 9592.09 31933.91 00:13:31.262 0 00:13:31.262 07:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:31.262 07:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 72998 00:13:31.262 07:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72998 ']' 00:13:31.262 07:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72998 00:13:31.262 07:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:31.262 07:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:31.262 07:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72998 00:13:31.262 07:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:31.262 killing process with pid 72998 00:13:31.262 07:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:31.262 07:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72998' 00:13:31.262 Received shutdown signal, test time was about 10.000000 seconds 00:13:31.262 00:13:31.262 Latency(us) 00:13:31.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.262 
=================================================================================================================== 00:13:31.262 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:31.262 07:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72998 00:13:31.262 [2024-07-15 07:17:40.192209] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:31.262 07:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72998 00:13:31.521 07:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H7v7mznCAw 00:13:31.521 07:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:31.521 07:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H7v7mznCAw 00:13:31.521 07:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:31.521 07:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:31.521 07:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:31.521 07:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:31.521 07:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H7v7mznCAw 00:13:31.521 07:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:31.521 07:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:31.521 07:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:31.521 07:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.H7v7mznCAw' 00:13:31.521 07:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:31.521 07:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73120 00:13:31.521 07:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:31.521 07:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:31.521 07:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73120 /var/tmp/bdevperf.sock 00:13:31.521 07:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73120 ']' 00:13:31.521 07:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:31.521 07:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:31.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:31.521 07:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:31.521 07:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:31.521 07:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:31.521 [2024-07-15 07:17:40.430587] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:13:31.521 [2024-07-15 07:17:40.430702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73120 ] 00:13:31.780 [2024-07-15 07:17:40.574906] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.780 [2024-07-15 07:17:40.635689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.780 [2024-07-15 07:17:40.666045] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:32.714 07:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.714 07:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:32.714 07:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.H7v7mznCAw 00:13:32.973 [2024-07-15 07:17:41.788874] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:32.973 [2024-07-15 07:17:41.789003] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:32.973 [2024-07-15 07:17:41.794061] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:32.973 [2024-07-15 07:17:41.794650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb51f0 (107): Transport endpoint is not connected 00:13:32.973 [2024-07-15 07:17:41.795643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb51f0 (9): Bad file descriptor 00:13:32.973 [2024-07-15 07:17:41.796630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:32.973 [2024-07-15 07:17:41.796654] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:32.973 [2024-07-15 07:17:41.796669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
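Editor's note: the failure above is the expected outcome of the first negative case. The initiator presents /tmp/tmp.H7v7mznCAw, a key that was never registered for host1 on the target, so the TLS handshake cannot complete and bdev_nvme_attach_controller surfaces it as an I/O error (code -5 in the JSON-RPC response that follows). A hedged sketch of asserting that behaviour in a shell test, with paths as assumed earlier:

  # Expect the attach to fail when the PSK does not match the key registered on the target.
  if ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.H7v7mznCAw; then
      echo "unexpected success: TLS handshake should have failed" >&2
      exit 1
  fi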
00:13:32.973 request: 00:13:32.973 { 00:13:32.973 "name": "TLSTEST", 00:13:32.973 "trtype": "tcp", 00:13:32.973 "traddr": "10.0.0.2", 00:13:32.973 "adrfam": "ipv4", 00:13:32.973 "trsvcid": "4420", 00:13:32.973 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:32.973 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:32.973 "prchk_reftag": false, 00:13:32.973 "prchk_guard": false, 00:13:32.973 "hdgst": false, 00:13:32.973 "ddgst": false, 00:13:32.973 "psk": "/tmp/tmp.H7v7mznCAw", 00:13:32.973 "method": "bdev_nvme_attach_controller", 00:13:32.973 "req_id": 1 00:13:32.973 } 00:13:32.973 Got JSON-RPC error response 00:13:32.973 response: 00:13:32.973 { 00:13:32.973 "code": -5, 00:13:32.973 "message": "Input/output error" 00:13:32.973 } 00:13:32.973 07:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73120 00:13:32.973 07:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73120 ']' 00:13:32.973 07:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73120 00:13:32.973 07:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:32.973 07:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:32.973 07:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73120 00:13:32.973 07:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:32.973 07:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:32.973 killing process with pid 73120 00:13:32.973 Received shutdown signal, test time was about 10.000000 seconds 00:13:32.973 00:13:32.973 Latency(us) 00:13:32.973 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:32.973 =================================================================================================================== 00:13:32.973 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:32.973 07:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73120' 00:13:32.973 07:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73120 00:13:32.973 [2024-07-15 07:17:41.838390] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:32.973 07:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73120 00:13:33.232 07:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:33.232 07:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:33.232 07:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:33.232 07:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:33.232 07:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:33.232 07:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.X2l3ElTpXW 00:13:33.232 07:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:33.233 07:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.X2l3ElTpXW 00:13:33.233 07:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:33.233 07:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:33.233 07:17:41 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:33.233 07:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:33.233 07:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.X2l3ElTpXW 00:13:33.233 07:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:33.233 07:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:33.233 07:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:33.233 07:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.X2l3ElTpXW' 00:13:33.233 07:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:33.233 07:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73142 00:13:33.233 07:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:33.233 07:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:33.233 07:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73142 /var/tmp/bdevperf.sock 00:13:33.233 07:17:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73142 ']' 00:13:33.233 07:17:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:33.233 07:17:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:33.233 07:17:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:33.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:33.233 07:17:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:33.233 07:17:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:33.233 [2024-07-15 07:17:42.055343] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:13:33.233 [2024-07-15 07:17:42.055443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73142 ] 00:13:33.492 [2024-07-15 07:17:42.196152] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.492 [2024-07-15 07:17:42.256132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.492 [2024-07-15 07:17:42.285846] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:34.428 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:34.428 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:34.428 07:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.X2l3ElTpXW 00:13:34.428 [2024-07-15 07:17:43.364288] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:34.428 [2024-07-15 07:17:43.364429] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:34.428 [2024-07-15 07:17:43.373465] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:34.428 [2024-07-15 07:17:43.373527] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:34.428 [2024-07-15 07:17:43.373595] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:34.428 [2024-07-15 07:17:43.374220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdba1f0 (107): Transport endpoint is not connected 00:13:34.428 [2024-07-15 07:17:43.375203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdba1f0 (9): Bad file descriptor 00:13:34.428 [2024-07-15 07:17:43.376199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:34.428 [2024-07-15 07:17:43.376228] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:34.428 [2024-07-15 07:17:43.376244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:34.428 request: 00:13:34.428 { 00:13:34.428 "name": "TLSTEST", 00:13:34.428 "trtype": "tcp", 00:13:34.428 "traddr": "10.0.0.2", 00:13:34.428 "adrfam": "ipv4", 00:13:34.428 "trsvcid": "4420", 00:13:34.428 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:34.428 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:34.428 "prchk_reftag": false, 00:13:34.428 "prchk_guard": false, 00:13:34.428 "hdgst": false, 00:13:34.428 "ddgst": false, 00:13:34.428 "psk": "/tmp/tmp.X2l3ElTpXW", 00:13:34.428 "method": "bdev_nvme_attach_controller", 00:13:34.428 "req_id": 1 00:13:34.428 } 00:13:34.428 Got JSON-RPC error response 00:13:34.428 response: 00:13:34.428 { 00:13:34.428 "code": -5, 00:13:34.428 "message": "Input/output error" 00:13:34.428 } 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73142 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73142 ']' 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73142 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73142 00:13:34.686 killing process with pid 73142 00:13:34.686 Received shutdown signal, test time was about 10.000000 seconds 00:13:34.686 00:13:34.686 Latency(us) 00:13:34.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.686 =================================================================================================================== 00:13:34.686 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73142' 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73142 00:13:34.686 [2024-07-15 07:17:43.424667] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73142 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.X2l3ElTpXW 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.X2l3ElTpXW 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.X2l3ElTpXW 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.X2l3ElTpXW' 00:13:34.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73176 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73176 /var/tmp/bdevperf.sock 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73176 ']' 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:34.686 07:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:34.944 [2024-07-15 07:17:43.650829] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:13:34.945 [2024-07-15 07:17:43.651181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73176 ] 00:13:34.945 [2024-07-15 07:17:43.800207] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.945 [2024-07-15 07:17:43.872219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.203 [2024-07-15 07:17:43.906452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:35.771 07:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:35.771 07:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:35.771 07:17:44 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.X2l3ElTpXW 00:13:36.030 [2024-07-15 07:17:44.799281] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:36.030 [2024-07-15 07:17:44.799661] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:36.030 [2024-07-15 07:17:44.804664] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:36.030 [2024-07-15 07:17:44.804709] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:36.030 [2024-07-15 07:17:44.804766] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:36.030 [2024-07-15 07:17:44.805336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183a1f0 (107): Transport endpoint is not connected 00:13:36.030 [2024-07-15 07:17:44.806322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183a1f0 (9): Bad file descriptor 00:13:36.030 [2024-07-15 07:17:44.807318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:13:36.030 [2024-07-15 07:17:44.807345] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:36.030 [2024-07-15 07:17:44.807359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:13:36.030 request: 00:13:36.030 { 00:13:36.030 "name": "TLSTEST", 00:13:36.030 "trtype": "tcp", 00:13:36.030 "traddr": "10.0.0.2", 00:13:36.030 "adrfam": "ipv4", 00:13:36.030 "trsvcid": "4420", 00:13:36.030 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:36.030 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:36.030 "prchk_reftag": false, 00:13:36.030 "prchk_guard": false, 00:13:36.030 "hdgst": false, 00:13:36.030 "ddgst": false, 00:13:36.030 "psk": "/tmp/tmp.X2l3ElTpXW", 00:13:36.030 "method": "bdev_nvme_attach_controller", 00:13:36.030 "req_id": 1 00:13:36.030 } 00:13:36.030 Got JSON-RPC error response 00:13:36.030 response: 00:13:36.030 { 00:13:36.030 "code": -5, 00:13:36.030 "message": "Input/output error" 00:13:36.030 } 00:13:36.030 07:17:44 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73176 00:13:36.030 07:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73176 ']' 00:13:36.030 07:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73176 00:13:36.030 07:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:36.030 07:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:36.030 07:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73176 00:13:36.030 killing process with pid 73176 00:13:36.030 Received shutdown signal, test time was about 10.000000 seconds 00:13:36.030 00:13:36.030 Latency(us) 00:13:36.030 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.030 =================================================================================================================== 00:13:36.030 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:36.030 07:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:36.030 07:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:36.030 07:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73176' 00:13:36.030 07:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73176 00:13:36.030 [2024-07-15 07:17:44.853908] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:36.031 07:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73176 00:13:36.289 07:17:45 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:36.289 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:36.289 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:36.289 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:36.289 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:36.289 07:17:45 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:36.289 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:36.289 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:36.289 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:36.289 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.289 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:13:36.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:36.290 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.290 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:36.290 07:17:45 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:36.290 07:17:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:36.290 07:17:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:36.290 07:17:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:36.290 07:17:45 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:36.290 07:17:45 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73198 00:13:36.290 07:17:45 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:36.290 07:17:45 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73198 /var/tmp/bdevperf.sock 00:13:36.290 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73198 ']' 00:13:36.290 07:17:45 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:36.290 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:36.290 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:36.290 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:36.290 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:36.290 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:36.290 [2024-07-15 07:17:45.073015] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:13:36.290 [2024-07-15 07:17:45.073400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73198 ] 00:13:36.290 [2024-07-15 07:17:45.208606] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.548 [2024-07-15 07:17:45.267450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.548 [2024-07-15 07:17:45.297047] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:36.548 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:36.548 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:36.548 07:17:45 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:36.808 [2024-07-15 07:17:45.573772] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:36.808 [2024-07-15 07:17:45.575439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf19c00 (9): Bad file descriptor 00:13:36.808 [2024-07-15 07:17:45.576431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:36.808 [2024-07-15 07:17:45.576456] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:36.808 [2024-07-15 07:17:45.576470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
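Editor's note: this variant attempts the attach with no --psk at all against the listener created with -k, so the connection is torn down before a controller can initialize and the same -5/Input/output error is reported below. A one-line sketch of that assertion:

  # No --psk supplied: the TLS-only listener (-k) refuses the plain-text attach.
  ! ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1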
00:13:36.808 request: 00:13:36.808 { 00:13:36.808 "name": "TLSTEST", 00:13:36.808 "trtype": "tcp", 00:13:36.808 "traddr": "10.0.0.2", 00:13:36.808 "adrfam": "ipv4", 00:13:36.808 "trsvcid": "4420", 00:13:36.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:36.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:36.808 "prchk_reftag": false, 00:13:36.808 "prchk_guard": false, 00:13:36.808 "hdgst": false, 00:13:36.808 "ddgst": false, 00:13:36.808 "method": "bdev_nvme_attach_controller", 00:13:36.808 "req_id": 1 00:13:36.808 } 00:13:36.808 Got JSON-RPC error response 00:13:36.808 response: 00:13:36.808 { 00:13:36.808 "code": -5, 00:13:36.808 "message": "Input/output error" 00:13:36.808 } 00:13:36.808 07:17:45 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73198 00:13:36.808 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73198 ']' 00:13:36.808 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73198 00:13:36.808 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:36.808 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:36.808 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73198 00:13:36.808 killing process with pid 73198 00:13:36.808 Received shutdown signal, test time was about 10.000000 seconds 00:13:36.808 00:13:36.808 Latency(us) 00:13:36.808 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.808 =================================================================================================================== 00:13:36.808 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:36.808 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:36.808 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:36.808 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73198' 00:13:36.808 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73198 00:13:36.808 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73198 00:13:37.066 07:17:45 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:37.066 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:37.066 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:37.066 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:37.066 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:37.066 07:17:45 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 72763 00:13:37.066 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72763 ']' 00:13:37.066 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72763 00:13:37.066 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:37.066 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:37.066 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72763 00:13:37.066 killing process with pid 72763 00:13:37.066 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:37.066 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:37.066 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
72763' 00:13:37.066 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72763 00:13:37.066 [2024-07-15 07:17:45.798577] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:37.066 07:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72763 00:13:37.066 07:17:45 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:37.066 07:17:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:37.066 07:17:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:37.066 07:17:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:37.066 07:17:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:37.066 07:17:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:13:37.066 07:17:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:37.066 07:17:46 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:37.066 07:17:46 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:13:37.066 07:17:46 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.I6ECYVYnGY 00:13:37.066 07:17:46 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:37.066 07:17:46 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.I6ECYVYnGY 00:13:37.066 07:17:46 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:13:37.066 07:17:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:37.066 07:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:37.066 07:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:37.324 07:17:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73228 00:13:37.324 07:17:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:37.324 07:17:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73228 00:13:37.324 07:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73228 ']' 00:13:37.324 07:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.324 07:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:37.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.324 07:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.324 07:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:37.324 07:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:37.324 [2024-07-15 07:17:46.081438] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:13:37.324 [2024-07-15 07:17:46.081536] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.324 [2024-07-15 07:17:46.216955] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.325 [2024-07-15 07:17:46.277379] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.325 [2024-07-15 07:17:46.277437] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.325 [2024-07-15 07:17:46.277449] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.325 [2024-07-15 07:17:46.277458] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.325 [2024-07-15 07:17:46.277465] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:37.325 [2024-07-15 07:17:46.277490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.583 [2024-07-15 07:17:46.306700] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:38.149 07:17:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:38.149 07:17:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:38.149 07:17:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:38.149 07:17:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:38.149 07:17:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:38.407 07:17:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.407 07:17:47 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.I6ECYVYnGY 00:13:38.407 07:17:47 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.I6ECYVYnGY 00:13:38.407 07:17:47 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:38.665 [2024-07-15 07:17:47.397214] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.665 07:17:47 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:38.924 07:17:47 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:39.182 [2024-07-15 07:17:48.005366] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:39.182 [2024-07-15 07:17:48.005578] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.182 07:17:48 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:39.441 malloc0 00:13:39.441 07:17:48 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:39.699 07:17:48 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.I6ECYVYnGY 00:13:39.957 
[2024-07-15 07:17:48.836061] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:39.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:39.957 07:17:48 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.I6ECYVYnGY 00:13:39.957 07:17:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:39.957 07:17:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:39.957 07:17:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:39.957 07:17:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.I6ECYVYnGY' 00:13:39.957 07:17:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:39.957 07:17:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73283 00:13:39.957 07:17:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:39.957 07:17:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:39.957 07:17:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73283 /var/tmp/bdevperf.sock 00:13:39.957 07:17:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73283 ']' 00:13:39.957 07:17:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:39.957 07:17:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:39.957 07:17:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:39.957 07:17:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:39.957 07:17:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:40.215 [2024-07-15 07:17:48.911715] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
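Before the bdevperf half gets going, it is worth collecting what setup_nvmf_tgt (target/tls.sh@49-58) actually issued above, stripped of the xtrace noise; the paths, NQNs and key file are exactly the ones used in this run, and the rpc/key shell variables are just shorthand:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/tmp/tmp.I6ECYVYnGY                                      # interchange PSK from above, mode 0600
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables the TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

The add_host call is what ties the host NQN to the PSK and is the source of the 'PSK path' deprecation warning seen above.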
00:13:40.215 [2024-07-15 07:17:48.912034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73283 ] 00:13:40.215 [2024-07-15 07:17:49.052152] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.215 [2024-07-15 07:17:49.123581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.215 [2024-07-15 07:17:49.156405] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:40.473 07:17:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:40.473 07:17:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:40.473 07:17:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.I6ECYVYnGY 00:13:40.731 [2024-07-15 07:17:49.488153] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:40.731 [2024-07-15 07:17:49.488540] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:40.731 TLSTESTn1 00:13:40.731 07:17:49 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:40.989 Running I/O for 10 seconds... 00:13:50.960 00:13:50.960 Latency(us) 00:13:50.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.960 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:50.960 Verification LBA range: start 0x0 length 0x2000 00:13:50.960 TLSTESTn1 : 10.02 3851.72 15.05 0.00 0.00 33170.73 5481.19 33602.09 00:13:50.960 =================================================================================================================== 00:13:50.960 Total : 3851.72 15.05 0.00 0.00 33170.73 5481.19 33602.09 00:13:50.960 0 00:13:50.961 07:17:59 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:50.961 07:17:59 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73283 00:13:50.961 07:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73283 ']' 00:13:50.961 07:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73283 00:13:50.961 07:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:50.961 07:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:50.961 07:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73283 00:13:50.961 07:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:50.961 killing process with pid 73283 00:13:50.961 Received shutdown signal, test time was about 10.000000 seconds 00:13:50.961 00:13:50.961 Latency(us) 00:13:50.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.961 =================================================================================================================== 00:13:50.961 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:50.961 07:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:13:50.961 07:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73283' 00:13:50.961 07:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73283 00:13:50.961 [2024-07-15 07:17:59.768164] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:50.961 07:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73283 00:13:51.219 07:17:59 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.I6ECYVYnGY 00:13:51.219 07:17:59 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.I6ECYVYnGY 00:13:51.219 07:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:51.219 07:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.I6ECYVYnGY 00:13:51.219 07:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:51.219 07:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:51.219 07:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:51.219 07:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:51.219 07:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.I6ECYVYnGY 00:13:51.219 07:17:59 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:51.219 07:17:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:51.219 07:17:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:51.219 07:17:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.I6ECYVYnGY' 00:13:51.219 07:17:59 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:51.219 07:17:59 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:51.219 07:17:59 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73410 00:13:51.219 07:17:59 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:51.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:51.219 07:17:59 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73410 /var/tmp/bdevperf.sock 00:13:51.219 07:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73410 ']' 00:13:51.219 07:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:51.219 07:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:51.219 07:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:51.219 07:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:51.219 07:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:51.219 [2024-07-15 07:18:00.011492] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
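The run_bdevperf helper used for the verify run just completed, and re-used for the expected-failure case after the chmod 0666 above, reduces to three initiator-side steps (a sketch of the commands visible in the trace; the helper backgrounds bdevperf and waits on its RPC socket):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.I6ECYVYnGY
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

With the key file at 0600 the attach succeeds and TLSTESTn1 runs the 10-second verify workload; with the file world-readable, bdev_nvme refuses to load it ('Incorrect permissions for PSK file') and the attach fails with Operation not permitted, which is exactly what the NOT wrapper asserts below.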
00:13:51.219 [2024-07-15 07:18:00.011584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73410 ] 00:13:51.219 [2024-07-15 07:18:00.147428] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.477 [2024-07-15 07:18:00.219812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.477 [2024-07-15 07:18:00.254380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:52.409 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.409 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:52.409 07:18:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.I6ECYVYnGY 00:13:52.409 [2024-07-15 07:18:01.339299] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:52.409 [2024-07-15 07:18:01.339389] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:52.409 [2024-07-15 07:18:01.339403] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.I6ECYVYnGY 00:13:52.409 request: 00:13:52.409 { 00:13:52.409 "name": "TLSTEST", 00:13:52.409 "trtype": "tcp", 00:13:52.409 "traddr": "10.0.0.2", 00:13:52.409 "adrfam": "ipv4", 00:13:52.409 "trsvcid": "4420", 00:13:52.409 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:52.409 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:52.409 "prchk_reftag": false, 00:13:52.409 "prchk_guard": false, 00:13:52.409 "hdgst": false, 00:13:52.409 "ddgst": false, 00:13:52.409 "psk": "/tmp/tmp.I6ECYVYnGY", 00:13:52.409 "method": "bdev_nvme_attach_controller", 00:13:52.409 "req_id": 1 00:13:52.409 } 00:13:52.409 Got JSON-RPC error response 00:13:52.409 response: 00:13:52.409 { 00:13:52.409 "code": -1, 00:13:52.409 "message": "Operation not permitted" 00:13:52.409 } 00:13:52.409 07:18:01 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73410 00:13:52.409 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73410 ']' 00:13:52.409 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73410 00:13:52.667 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:52.667 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:52.667 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73410 00:13:52.667 killing process with pid 73410 00:13:52.667 Received shutdown signal, test time was about 10.000000 seconds 00:13:52.667 00:13:52.667 Latency(us) 00:13:52.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.667 =================================================================================================================== 00:13:52.667 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:52.667 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:52.667 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:52.667 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 73410' 00:13:52.667 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73410 00:13:52.667 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73410 00:13:52.667 07:18:01 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:52.667 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:52.667 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:52.667 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:52.667 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:52.667 07:18:01 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 73228 00:13:52.667 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73228 ']' 00:13:52.667 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73228 00:13:52.667 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:52.667 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:52.667 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73228 00:13:52.667 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:52.667 killing process with pid 73228 00:13:52.667 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:52.667 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73228' 00:13:52.667 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73228 00:13:52.667 [2024-07-15 07:18:01.584969] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:52.667 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73228 00:13:52.926 07:18:01 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:13:52.926 07:18:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:52.926 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:52.926 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:52.926 07:18:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73445 00:13:52.926 07:18:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:52.926 07:18:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73445 00:13:52.926 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73445 ']' 00:13:52.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.926 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.926 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:52.926 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.926 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:52.926 07:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:52.926 [2024-07-15 07:18:01.816736] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:13:52.926 [2024-07-15 07:18:01.816835] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.183 [2024-07-15 07:18:01.953221] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.183 [2024-07-15 07:18:02.014387] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.183 [2024-07-15 07:18:02.014446] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.183 [2024-07-15 07:18:02.014459] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.183 [2024-07-15 07:18:02.014467] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.183 [2024-07-15 07:18:02.014475] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:53.183 [2024-07-15 07:18:02.014502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.183 [2024-07-15 07:18:02.045435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:54.114 07:18:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:54.114 07:18:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:54.114 07:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:54.114 07:18:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:54.114 07:18:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:54.114 07:18:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.114 07:18:02 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.I6ECYVYnGY 00:13:54.114 07:18:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:54.114 07:18:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.I6ECYVYnGY 00:13:54.114 07:18:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:13:54.114 07:18:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:54.114 07:18:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:13:54.114 07:18:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:54.114 07:18:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.I6ECYVYnGY 00:13:54.114 07:18:02 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.I6ECYVYnGY 00:13:54.114 07:18:02 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:54.375 [2024-07-15 07:18:03.168106] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:54.375 07:18:03 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:54.632 07:18:03 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:54.889 [2024-07-15 07:18:03.784203] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: 
TLS support is considered experimental 00:13:54.889 [2024-07-15 07:18:03.784424] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.889 07:18:03 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:55.146 malloc0 00:13:55.403 07:18:04 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:55.660 07:18:04 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.I6ECYVYnGY 00:13:55.918 [2024-07-15 07:18:04.844440] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:55.918 [2024-07-15 07:18:04.844497] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:13:55.918 [2024-07-15 07:18:04.844534] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:13:55.918 request: 00:13:55.918 { 00:13:55.918 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.918 "host": "nqn.2016-06.io.spdk:host1", 00:13:55.918 "psk": "/tmp/tmp.I6ECYVYnGY", 00:13:55.918 "method": "nvmf_subsystem_add_host", 00:13:55.918 "req_id": 1 00:13:55.918 } 00:13:55.918 Got JSON-RPC error response 00:13:55.918 response: 00:13:55.918 { 00:13:55.918 "code": -32603, 00:13:55.918 "message": "Internal error" 00:13:55.918 } 00:13:56.175 07:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:56.175 07:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:56.175 07:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:56.175 07:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:56.175 07:18:04 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 73445 00:13:56.175 07:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73445 ']' 00:13:56.175 07:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73445 00:13:56.175 07:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:56.175 07:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:56.175 07:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73445 00:13:56.175 killing process with pid 73445 00:13:56.175 07:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:56.175 07:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:56.175 07:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73445' 00:13:56.175 07:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73445 00:13:56.175 07:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73445 00:13:56.175 07:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.I6ECYVYnGY 00:13:56.175 07:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:13:56.175 07:18:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:56.175 07:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:56.175 07:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:56.175 07:18:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73511 00:13:56.176 
07:18:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:56.176 07:18:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73511 00:13:56.176 07:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73511 ']' 00:13:56.176 07:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.176 07:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:56.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.176 07:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.176 07:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:56.176 07:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:56.435 [2024-07-15 07:18:05.180710] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:13:56.435 [2024-07-15 07:18:05.180872] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.435 [2024-07-15 07:18:05.330785] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.693 [2024-07-15 07:18:05.412650] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.693 [2024-07-15 07:18:05.412726] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.693 [2024-07-15 07:18:05.412741] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.693 [2024-07-15 07:18:05.412751] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.693 [2024-07-15 07:18:05.412760] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
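Both permission failures above (the bdev_nvme attach at target/tls.sh@171 and the nvmf_subsystem_add_host at target/tls.sh@177) come from the same rule: the PSK file must not carry group or other permission bits, which is why the script restores it to 0600 before continuing. A quick check along these lines reflects the behaviour observed in this run (an illustration of the rule, not SPDK's actual validation code):

mode=$(stat -c '%a' /tmp/tmp.I6ECYVYnGY)
case "$mode" in
  *00) echo "mode $mode: no group/other access, PSK load should succeed (as with 0600 above)" ;;
  *)   echo "mode $mode: group/other bits set, PSK load fails as it did with 0666" ;;
esac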
00:13:56.693 [2024-07-15 07:18:05.412794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.693 [2024-07-15 07:18:05.448022] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:56.693 07:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:56.693 07:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:56.693 07:18:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:56.693 07:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:56.693 07:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:56.693 07:18:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.693 07:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.I6ECYVYnGY 00:13:56.693 07:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.I6ECYVYnGY 00:13:56.693 07:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:56.951 [2024-07-15 07:18:05.809691] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.951 07:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:57.521 07:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:57.780 [2024-07-15 07:18:06.493767] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:57.780 [2024-07-15 07:18:06.494051] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:57.780 07:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:58.040 malloc0 00:13:58.040 07:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:58.298 07:18:07 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.I6ECYVYnGY 00:13:58.557 [2024-07-15 07:18:07.345025] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:58.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
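After the next bdevperf attach (TLSTESTn1 again, with the restored 0600 key), the script captures the running configuration of both applications; the two large JSON documents that follow are simply the output of save_config, later replayed through -c in the final stage:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config                              # -> tgtconf
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config    # -> bdevperfconf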
00:13:58.557 07:18:07 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=73558 00:13:58.557 07:18:07 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:58.557 07:18:07 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 73558 /var/tmp/bdevperf.sock 00:13:58.557 07:18:07 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:58.557 07:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73558 ']' 00:13:58.557 07:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:58.557 07:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:58.557 07:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:58.557 07:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:58.557 07:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:58.557 [2024-07-15 07:18:07.412312] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:13:58.557 [2024-07-15 07:18:07.412400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73558 ] 00:13:58.815 [2024-07-15 07:18:07.544472] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.815 [2024-07-15 07:18:07.612701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.815 [2024-07-15 07:18:07.644814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:59.750 07:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:59.750 07:18:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:59.750 07:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.I6ECYVYnGY 00:13:59.750 [2024-07-15 07:18:08.620601] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:59.750 [2024-07-15 07:18:08.620968] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:59.750 TLSTESTn1 00:14:00.008 07:18:08 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:00.267 07:18:09 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:14:00.267 "subsystems": [ 00:14:00.267 { 00:14:00.267 "subsystem": "keyring", 00:14:00.267 "config": [] 00:14:00.267 }, 00:14:00.267 { 00:14:00.267 "subsystem": "iobuf", 00:14:00.267 "config": [ 00:14:00.267 { 00:14:00.267 "method": "iobuf_set_options", 00:14:00.267 "params": { 00:14:00.267 "small_pool_count": 8192, 00:14:00.267 "large_pool_count": 1024, 00:14:00.267 "small_bufsize": 8192, 00:14:00.267 "large_bufsize": 135168 00:14:00.267 } 00:14:00.267 } 00:14:00.267 ] 00:14:00.267 }, 00:14:00.267 { 00:14:00.267 "subsystem": "sock", 00:14:00.267 "config": [ 00:14:00.267 { 00:14:00.267 
"method": "sock_set_default_impl", 00:14:00.267 "params": { 00:14:00.267 "impl_name": "uring" 00:14:00.267 } 00:14:00.267 }, 00:14:00.267 { 00:14:00.267 "method": "sock_impl_set_options", 00:14:00.267 "params": { 00:14:00.267 "impl_name": "ssl", 00:14:00.267 "recv_buf_size": 4096, 00:14:00.267 "send_buf_size": 4096, 00:14:00.267 "enable_recv_pipe": true, 00:14:00.267 "enable_quickack": false, 00:14:00.267 "enable_placement_id": 0, 00:14:00.267 "enable_zerocopy_send_server": true, 00:14:00.267 "enable_zerocopy_send_client": false, 00:14:00.267 "zerocopy_threshold": 0, 00:14:00.267 "tls_version": 0, 00:14:00.267 "enable_ktls": false 00:14:00.267 } 00:14:00.267 }, 00:14:00.267 { 00:14:00.267 "method": "sock_impl_set_options", 00:14:00.267 "params": { 00:14:00.267 "impl_name": "posix", 00:14:00.267 "recv_buf_size": 2097152, 00:14:00.267 "send_buf_size": 2097152, 00:14:00.267 "enable_recv_pipe": true, 00:14:00.267 "enable_quickack": false, 00:14:00.267 "enable_placement_id": 0, 00:14:00.267 "enable_zerocopy_send_server": true, 00:14:00.267 "enable_zerocopy_send_client": false, 00:14:00.267 "zerocopy_threshold": 0, 00:14:00.267 "tls_version": 0, 00:14:00.267 "enable_ktls": false 00:14:00.267 } 00:14:00.267 }, 00:14:00.267 { 00:14:00.267 "method": "sock_impl_set_options", 00:14:00.267 "params": { 00:14:00.267 "impl_name": "uring", 00:14:00.267 "recv_buf_size": 2097152, 00:14:00.268 "send_buf_size": 2097152, 00:14:00.268 "enable_recv_pipe": true, 00:14:00.268 "enable_quickack": false, 00:14:00.268 "enable_placement_id": 0, 00:14:00.268 "enable_zerocopy_send_server": false, 00:14:00.268 "enable_zerocopy_send_client": false, 00:14:00.268 "zerocopy_threshold": 0, 00:14:00.268 "tls_version": 0, 00:14:00.268 "enable_ktls": false 00:14:00.268 } 00:14:00.268 } 00:14:00.268 ] 00:14:00.268 }, 00:14:00.268 { 00:14:00.268 "subsystem": "vmd", 00:14:00.268 "config": [] 00:14:00.268 }, 00:14:00.268 { 00:14:00.268 "subsystem": "accel", 00:14:00.268 "config": [ 00:14:00.268 { 00:14:00.268 "method": "accel_set_options", 00:14:00.268 "params": { 00:14:00.268 "small_cache_size": 128, 00:14:00.268 "large_cache_size": 16, 00:14:00.268 "task_count": 2048, 00:14:00.268 "sequence_count": 2048, 00:14:00.268 "buf_count": 2048 00:14:00.268 } 00:14:00.268 } 00:14:00.268 ] 00:14:00.268 }, 00:14:00.268 { 00:14:00.268 "subsystem": "bdev", 00:14:00.268 "config": [ 00:14:00.268 { 00:14:00.268 "method": "bdev_set_options", 00:14:00.268 "params": { 00:14:00.268 "bdev_io_pool_size": 65535, 00:14:00.268 "bdev_io_cache_size": 256, 00:14:00.268 "bdev_auto_examine": true, 00:14:00.268 "iobuf_small_cache_size": 128, 00:14:00.268 "iobuf_large_cache_size": 16 00:14:00.268 } 00:14:00.268 }, 00:14:00.268 { 00:14:00.268 "method": "bdev_raid_set_options", 00:14:00.268 "params": { 00:14:00.268 "process_window_size_kb": 1024 00:14:00.268 } 00:14:00.268 }, 00:14:00.268 { 00:14:00.268 "method": "bdev_iscsi_set_options", 00:14:00.268 "params": { 00:14:00.268 "timeout_sec": 30 00:14:00.268 } 00:14:00.268 }, 00:14:00.268 { 00:14:00.268 "method": "bdev_nvme_set_options", 00:14:00.268 "params": { 00:14:00.268 "action_on_timeout": "none", 00:14:00.268 "timeout_us": 0, 00:14:00.268 "timeout_admin_us": 0, 00:14:00.268 "keep_alive_timeout_ms": 10000, 00:14:00.268 "arbitration_burst": 0, 00:14:00.268 "low_priority_weight": 0, 00:14:00.268 "medium_priority_weight": 0, 00:14:00.268 "high_priority_weight": 0, 00:14:00.268 "nvme_adminq_poll_period_us": 10000, 00:14:00.268 "nvme_ioq_poll_period_us": 0, 00:14:00.268 "io_queue_requests": 0, 00:14:00.268 
"delay_cmd_submit": true, 00:14:00.268 "transport_retry_count": 4, 00:14:00.268 "bdev_retry_count": 3, 00:14:00.268 "transport_ack_timeout": 0, 00:14:00.268 "ctrlr_loss_timeout_sec": 0, 00:14:00.268 "reconnect_delay_sec": 0, 00:14:00.268 "fast_io_fail_timeout_sec": 0, 00:14:00.268 "disable_auto_failback": false, 00:14:00.268 "generate_uuids": false, 00:14:00.268 "transport_tos": 0, 00:14:00.268 "nvme_error_stat": false, 00:14:00.268 "rdma_srq_size": 0, 00:14:00.268 "io_path_stat": false, 00:14:00.268 "allow_accel_sequence": false, 00:14:00.268 "rdma_max_cq_size": 0, 00:14:00.268 "rdma_cm_event_timeout_ms": 0, 00:14:00.268 "dhchap_digests": [ 00:14:00.268 "sha256", 00:14:00.268 "sha384", 00:14:00.268 "sha512" 00:14:00.268 ], 00:14:00.268 "dhchap_dhgroups": [ 00:14:00.268 "null", 00:14:00.268 "ffdhe2048", 00:14:00.268 "ffdhe3072", 00:14:00.268 "ffdhe4096", 00:14:00.268 "ffdhe6144", 00:14:00.268 "ffdhe8192" 00:14:00.268 ] 00:14:00.268 } 00:14:00.268 }, 00:14:00.268 { 00:14:00.268 "method": "bdev_nvme_set_hotplug", 00:14:00.268 "params": { 00:14:00.268 "period_us": 100000, 00:14:00.268 "enable": false 00:14:00.268 } 00:14:00.268 }, 00:14:00.268 { 00:14:00.268 "method": "bdev_malloc_create", 00:14:00.268 "params": { 00:14:00.268 "name": "malloc0", 00:14:00.268 "num_blocks": 8192, 00:14:00.268 "block_size": 4096, 00:14:00.268 "physical_block_size": 4096, 00:14:00.268 "uuid": "f59d87d2-d86c-43ad-9d15-8e332625179d", 00:14:00.268 "optimal_io_boundary": 0 00:14:00.268 } 00:14:00.268 }, 00:14:00.268 { 00:14:00.268 "method": "bdev_wait_for_examine" 00:14:00.268 } 00:14:00.268 ] 00:14:00.268 }, 00:14:00.268 { 00:14:00.268 "subsystem": "nbd", 00:14:00.268 "config": [] 00:14:00.268 }, 00:14:00.268 { 00:14:00.268 "subsystem": "scheduler", 00:14:00.268 "config": [ 00:14:00.268 { 00:14:00.268 "method": "framework_set_scheduler", 00:14:00.268 "params": { 00:14:00.268 "name": "static" 00:14:00.268 } 00:14:00.268 } 00:14:00.268 ] 00:14:00.268 }, 00:14:00.268 { 00:14:00.268 "subsystem": "nvmf", 00:14:00.268 "config": [ 00:14:00.268 { 00:14:00.268 "method": "nvmf_set_config", 00:14:00.268 "params": { 00:14:00.268 "discovery_filter": "match_any", 00:14:00.268 "admin_cmd_passthru": { 00:14:00.268 "identify_ctrlr": false 00:14:00.268 } 00:14:00.268 } 00:14:00.268 }, 00:14:00.268 { 00:14:00.268 "method": "nvmf_set_max_subsystems", 00:14:00.268 "params": { 00:14:00.268 "max_subsystems": 1024 00:14:00.268 } 00:14:00.268 }, 00:14:00.268 { 00:14:00.268 "method": "nvmf_set_crdt", 00:14:00.268 "params": { 00:14:00.268 "crdt1": 0, 00:14:00.268 "crdt2": 0, 00:14:00.268 "crdt3": 0 00:14:00.268 } 00:14:00.268 }, 00:14:00.268 { 00:14:00.268 "method": "nvmf_create_transport", 00:14:00.268 "params": { 00:14:00.268 "trtype": "TCP", 00:14:00.268 "max_queue_depth": 128, 00:14:00.268 "max_io_qpairs_per_ctrlr": 127, 00:14:00.268 "in_capsule_data_size": 4096, 00:14:00.268 "max_io_size": 131072, 00:14:00.268 "io_unit_size": 131072, 00:14:00.268 "max_aq_depth": 128, 00:14:00.268 "num_shared_buffers": 511, 00:14:00.268 "buf_cache_size": 4294967295, 00:14:00.268 "dif_insert_or_strip": false, 00:14:00.268 "zcopy": false, 00:14:00.268 "c2h_success": false, 00:14:00.268 "sock_priority": 0, 00:14:00.268 "abort_timeout_sec": 1, 00:14:00.268 "ack_timeout": 0, 00:14:00.268 "data_wr_pool_size": 0 00:14:00.268 } 00:14:00.268 }, 00:14:00.268 { 00:14:00.268 "method": "nvmf_create_subsystem", 00:14:00.268 "params": { 00:14:00.268 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.268 "allow_any_host": false, 00:14:00.268 "serial_number": 
"SPDK00000000000001", 00:14:00.268 "model_number": "SPDK bdev Controller", 00:14:00.268 "max_namespaces": 10, 00:14:00.268 "min_cntlid": 1, 00:14:00.268 "max_cntlid": 65519, 00:14:00.268 "ana_reporting": false 00:14:00.268 } 00:14:00.268 }, 00:14:00.268 { 00:14:00.268 "method": "nvmf_subsystem_add_host", 00:14:00.268 "params": { 00:14:00.268 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.268 "host": "nqn.2016-06.io.spdk:host1", 00:14:00.268 "psk": "/tmp/tmp.I6ECYVYnGY" 00:14:00.268 } 00:14:00.268 }, 00:14:00.268 { 00:14:00.268 "method": "nvmf_subsystem_add_ns", 00:14:00.268 "params": { 00:14:00.268 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.268 "namespace": { 00:14:00.268 "nsid": 1, 00:14:00.268 "bdev_name": "malloc0", 00:14:00.268 "nguid": "F59D87D2D86C43AD9D158E332625179D", 00:14:00.268 "uuid": "f59d87d2-d86c-43ad-9d15-8e332625179d", 00:14:00.268 "no_auto_visible": false 00:14:00.268 } 00:14:00.268 } 00:14:00.268 }, 00:14:00.268 { 00:14:00.268 "method": "nvmf_subsystem_add_listener", 00:14:00.268 "params": { 00:14:00.268 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.268 "listen_address": { 00:14:00.268 "trtype": "TCP", 00:14:00.268 "adrfam": "IPv4", 00:14:00.268 "traddr": "10.0.0.2", 00:14:00.268 "trsvcid": "4420" 00:14:00.268 }, 00:14:00.268 "secure_channel": true 00:14:00.268 } 00:14:00.268 } 00:14:00.268 ] 00:14:00.268 } 00:14:00.268 ] 00:14:00.268 }' 00:14:00.268 07:18:09 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:00.528 07:18:09 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:14:00.528 "subsystems": [ 00:14:00.528 { 00:14:00.528 "subsystem": "keyring", 00:14:00.528 "config": [] 00:14:00.528 }, 00:14:00.528 { 00:14:00.528 "subsystem": "iobuf", 00:14:00.528 "config": [ 00:14:00.528 { 00:14:00.528 "method": "iobuf_set_options", 00:14:00.528 "params": { 00:14:00.528 "small_pool_count": 8192, 00:14:00.528 "large_pool_count": 1024, 00:14:00.528 "small_bufsize": 8192, 00:14:00.528 "large_bufsize": 135168 00:14:00.528 } 00:14:00.528 } 00:14:00.528 ] 00:14:00.528 }, 00:14:00.528 { 00:14:00.528 "subsystem": "sock", 00:14:00.528 "config": [ 00:14:00.528 { 00:14:00.528 "method": "sock_set_default_impl", 00:14:00.528 "params": { 00:14:00.528 "impl_name": "uring" 00:14:00.528 } 00:14:00.528 }, 00:14:00.528 { 00:14:00.528 "method": "sock_impl_set_options", 00:14:00.528 "params": { 00:14:00.528 "impl_name": "ssl", 00:14:00.528 "recv_buf_size": 4096, 00:14:00.528 "send_buf_size": 4096, 00:14:00.528 "enable_recv_pipe": true, 00:14:00.528 "enable_quickack": false, 00:14:00.528 "enable_placement_id": 0, 00:14:00.528 "enable_zerocopy_send_server": true, 00:14:00.528 "enable_zerocopy_send_client": false, 00:14:00.528 "zerocopy_threshold": 0, 00:14:00.528 "tls_version": 0, 00:14:00.528 "enable_ktls": false 00:14:00.528 } 00:14:00.528 }, 00:14:00.528 { 00:14:00.528 "method": "sock_impl_set_options", 00:14:00.528 "params": { 00:14:00.528 "impl_name": "posix", 00:14:00.528 "recv_buf_size": 2097152, 00:14:00.528 "send_buf_size": 2097152, 00:14:00.528 "enable_recv_pipe": true, 00:14:00.528 "enable_quickack": false, 00:14:00.528 "enable_placement_id": 0, 00:14:00.528 "enable_zerocopy_send_server": true, 00:14:00.528 "enable_zerocopy_send_client": false, 00:14:00.528 "zerocopy_threshold": 0, 00:14:00.528 "tls_version": 0, 00:14:00.528 "enable_ktls": false 00:14:00.528 } 00:14:00.528 }, 00:14:00.528 { 00:14:00.528 "method": "sock_impl_set_options", 00:14:00.528 "params": { 00:14:00.528 "impl_name": "uring", 
00:14:00.528 "recv_buf_size": 2097152, 00:14:00.528 "send_buf_size": 2097152, 00:14:00.528 "enable_recv_pipe": true, 00:14:00.528 "enable_quickack": false, 00:14:00.528 "enable_placement_id": 0, 00:14:00.528 "enable_zerocopy_send_server": false, 00:14:00.528 "enable_zerocopy_send_client": false, 00:14:00.528 "zerocopy_threshold": 0, 00:14:00.528 "tls_version": 0, 00:14:00.528 "enable_ktls": false 00:14:00.528 } 00:14:00.528 } 00:14:00.528 ] 00:14:00.528 }, 00:14:00.528 { 00:14:00.528 "subsystem": "vmd", 00:14:00.528 "config": [] 00:14:00.528 }, 00:14:00.528 { 00:14:00.528 "subsystem": "accel", 00:14:00.528 "config": [ 00:14:00.528 { 00:14:00.528 "method": "accel_set_options", 00:14:00.528 "params": { 00:14:00.528 "small_cache_size": 128, 00:14:00.528 "large_cache_size": 16, 00:14:00.528 "task_count": 2048, 00:14:00.528 "sequence_count": 2048, 00:14:00.528 "buf_count": 2048 00:14:00.528 } 00:14:00.528 } 00:14:00.528 ] 00:14:00.528 }, 00:14:00.528 { 00:14:00.528 "subsystem": "bdev", 00:14:00.528 "config": [ 00:14:00.528 { 00:14:00.528 "method": "bdev_set_options", 00:14:00.528 "params": { 00:14:00.528 "bdev_io_pool_size": 65535, 00:14:00.528 "bdev_io_cache_size": 256, 00:14:00.528 "bdev_auto_examine": true, 00:14:00.528 "iobuf_small_cache_size": 128, 00:14:00.528 "iobuf_large_cache_size": 16 00:14:00.528 } 00:14:00.528 }, 00:14:00.528 { 00:14:00.528 "method": "bdev_raid_set_options", 00:14:00.528 "params": { 00:14:00.528 "process_window_size_kb": 1024 00:14:00.528 } 00:14:00.528 }, 00:14:00.528 { 00:14:00.528 "method": "bdev_iscsi_set_options", 00:14:00.528 "params": { 00:14:00.528 "timeout_sec": 30 00:14:00.528 } 00:14:00.528 }, 00:14:00.528 { 00:14:00.528 "method": "bdev_nvme_set_options", 00:14:00.528 "params": { 00:14:00.528 "action_on_timeout": "none", 00:14:00.528 "timeout_us": 0, 00:14:00.528 "timeout_admin_us": 0, 00:14:00.528 "keep_alive_timeout_ms": 10000, 00:14:00.528 "arbitration_burst": 0, 00:14:00.528 "low_priority_weight": 0, 00:14:00.528 "medium_priority_weight": 0, 00:14:00.528 "high_priority_weight": 0, 00:14:00.528 "nvme_adminq_poll_period_us": 10000, 00:14:00.528 "nvme_ioq_poll_period_us": 0, 00:14:00.528 "io_queue_requests": 512, 00:14:00.528 "delay_cmd_submit": true, 00:14:00.528 "transport_retry_count": 4, 00:14:00.528 "bdev_retry_count": 3, 00:14:00.528 "transport_ack_timeout": 0, 00:14:00.528 "ctrlr_loss_timeout_sec": 0, 00:14:00.528 "reconnect_delay_sec": 0, 00:14:00.528 "fast_io_fail_timeout_sec": 0, 00:14:00.528 "disable_auto_failback": false, 00:14:00.528 "generate_uuids": false, 00:14:00.528 "transport_tos": 0, 00:14:00.528 "nvme_error_stat": false, 00:14:00.528 "rdma_srq_size": 0, 00:14:00.528 "io_path_stat": false, 00:14:00.528 "allow_accel_sequence": false, 00:14:00.528 "rdma_max_cq_size": 0, 00:14:00.528 "rdma_cm_event_timeout_ms": 0, 00:14:00.528 "dhchap_digests": [ 00:14:00.528 "sha256", 00:14:00.528 "sha384", 00:14:00.528 "sha512" 00:14:00.528 ], 00:14:00.528 "dhchap_dhgroups": [ 00:14:00.528 "null", 00:14:00.528 "ffdhe2048", 00:14:00.528 "ffdhe3072", 00:14:00.528 "ffdhe4096", 00:14:00.528 "ffdhe6144", 00:14:00.528 "ffdhe8192" 00:14:00.528 ] 00:14:00.528 } 00:14:00.528 }, 00:14:00.528 { 00:14:00.528 "method": "bdev_nvme_attach_controller", 00:14:00.528 "params": { 00:14:00.528 "name": "TLSTEST", 00:14:00.528 "trtype": "TCP", 00:14:00.528 "adrfam": "IPv4", 00:14:00.528 "traddr": "10.0.0.2", 00:14:00.528 "trsvcid": "4420", 00:14:00.528 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.528 "prchk_reftag": false, 00:14:00.528 "prchk_guard": false, 00:14:00.528 
"ctrlr_loss_timeout_sec": 0, 00:14:00.528 "reconnect_delay_sec": 0, 00:14:00.528 "fast_io_fail_timeout_sec": 0, 00:14:00.528 "psk": "/tmp/tmp.I6ECYVYnGY", 00:14:00.528 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:00.528 "hdgst": false, 00:14:00.528 "ddgst": false 00:14:00.528 } 00:14:00.528 }, 00:14:00.528 { 00:14:00.528 "method": "bdev_nvme_set_hotplug", 00:14:00.528 "params": { 00:14:00.528 "period_us": 100000, 00:14:00.528 "enable": false 00:14:00.528 } 00:14:00.528 }, 00:14:00.529 { 00:14:00.529 "method": "bdev_wait_for_examine" 00:14:00.529 } 00:14:00.529 ] 00:14:00.529 }, 00:14:00.529 { 00:14:00.529 "subsystem": "nbd", 00:14:00.529 "config": [] 00:14:00.529 } 00:14:00.529 ] 00:14:00.529 }' 00:14:00.529 07:18:09 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 73558 00:14:00.529 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73558 ']' 00:14:00.529 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73558 00:14:00.529 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:00.529 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:00.529 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73558 00:14:00.529 killing process with pid 73558 00:14:00.529 Received shutdown signal, test time was about 10.000000 seconds 00:14:00.529 00:14:00.529 Latency(us) 00:14:00.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.529 =================================================================================================================== 00:14:00.529 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:00.529 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:00.529 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:00.529 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73558' 00:14:00.529 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73558 00:14:00.529 [2024-07-15 07:18:09.452098] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:00.529 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73558 00:14:00.787 07:18:09 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 73511 00:14:00.787 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73511 ']' 00:14:00.787 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73511 00:14:00.787 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:00.787 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:00.787 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73511 00:14:00.787 killing process with pid 73511 00:14:00.787 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:00.787 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:00.787 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73511' 00:14:00.787 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73511 00:14:00.787 [2024-07-15 07:18:09.633992] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled 
for removal in v24.09 hit 1 times 00:14:00.787 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73511 00:14:01.045 07:18:09 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:01.045 07:18:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:01.045 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:01.045 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:01.045 07:18:09 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:14:01.045 "subsystems": [ 00:14:01.045 { 00:14:01.045 "subsystem": "keyring", 00:14:01.045 "config": [] 00:14:01.045 }, 00:14:01.045 { 00:14:01.045 "subsystem": "iobuf", 00:14:01.045 "config": [ 00:14:01.045 { 00:14:01.045 "method": "iobuf_set_options", 00:14:01.045 "params": { 00:14:01.045 "small_pool_count": 8192, 00:14:01.045 "large_pool_count": 1024, 00:14:01.045 "small_bufsize": 8192, 00:14:01.045 "large_bufsize": 135168 00:14:01.045 } 00:14:01.045 } 00:14:01.045 ] 00:14:01.045 }, 00:14:01.045 { 00:14:01.045 "subsystem": "sock", 00:14:01.045 "config": [ 00:14:01.045 { 00:14:01.045 "method": "sock_set_default_impl", 00:14:01.045 "params": { 00:14:01.045 "impl_name": "uring" 00:14:01.045 } 00:14:01.045 }, 00:14:01.045 { 00:14:01.045 "method": "sock_impl_set_options", 00:14:01.045 "params": { 00:14:01.045 "impl_name": "ssl", 00:14:01.045 "recv_buf_size": 4096, 00:14:01.045 "send_buf_size": 4096, 00:14:01.045 "enable_recv_pipe": true, 00:14:01.045 "enable_quickack": false, 00:14:01.045 "enable_placement_id": 0, 00:14:01.045 "enable_zerocopy_send_server": true, 00:14:01.045 "enable_zerocopy_send_client": false, 00:14:01.045 "zerocopy_threshold": 0, 00:14:01.045 "tls_version": 0, 00:14:01.045 "enable_ktls": false 00:14:01.045 } 00:14:01.045 }, 00:14:01.045 { 00:14:01.045 "method": "sock_impl_set_options", 00:14:01.045 "params": { 00:14:01.045 "impl_name": "posix", 00:14:01.045 "recv_buf_size": 2097152, 00:14:01.045 "send_buf_size": 2097152, 00:14:01.045 "enable_recv_pipe": true, 00:14:01.045 "enable_quickack": false, 00:14:01.045 "enable_placement_id": 0, 00:14:01.045 "enable_zerocopy_send_server": true, 00:14:01.045 "enable_zerocopy_send_client": false, 00:14:01.045 "zerocopy_threshold": 0, 00:14:01.045 "tls_version": 0, 00:14:01.045 "enable_ktls": false 00:14:01.045 } 00:14:01.045 }, 00:14:01.045 { 00:14:01.045 "method": "sock_impl_set_options", 00:14:01.045 "params": { 00:14:01.045 "impl_name": "uring", 00:14:01.045 "recv_buf_size": 2097152, 00:14:01.045 "send_buf_size": 2097152, 00:14:01.045 "enable_recv_pipe": true, 00:14:01.045 "enable_quickack": false, 00:14:01.045 "enable_placement_id": 0, 00:14:01.045 "enable_zerocopy_send_server": false, 00:14:01.045 "enable_zerocopy_send_client": false, 00:14:01.045 "zerocopy_threshold": 0, 00:14:01.045 "tls_version": 0, 00:14:01.045 "enable_ktls": false 00:14:01.045 } 00:14:01.045 } 00:14:01.045 ] 00:14:01.045 }, 00:14:01.045 { 00:14:01.045 "subsystem": "vmd", 00:14:01.045 "config": [] 00:14:01.045 }, 00:14:01.045 { 00:14:01.045 "subsystem": "accel", 00:14:01.045 "config": [ 00:14:01.045 { 00:14:01.045 "method": "accel_set_options", 00:14:01.045 "params": { 00:14:01.045 "small_cache_size": 128, 00:14:01.045 "large_cache_size": 16, 00:14:01.045 "task_count": 2048, 00:14:01.045 "sequence_count": 2048, 00:14:01.045 "buf_count": 2048 00:14:01.045 } 00:14:01.045 } 00:14:01.045 ] 00:14:01.045 }, 00:14:01.045 { 00:14:01.045 "subsystem": "bdev", 00:14:01.045 "config": [ 00:14:01.045 { 
00:14:01.045 "method": "bdev_set_options", 00:14:01.045 "params": { 00:14:01.045 "bdev_io_pool_size": 65535, 00:14:01.045 "bdev_io_cache_size": 256, 00:14:01.045 "bdev_auto_examine": true, 00:14:01.045 "iobuf_small_cache_size": 128, 00:14:01.045 "iobuf_large_cache_size": 16 00:14:01.045 } 00:14:01.045 }, 00:14:01.045 { 00:14:01.045 "method": "bdev_raid_set_options", 00:14:01.045 "params": { 00:14:01.045 "process_window_size_kb": 1024 00:14:01.045 } 00:14:01.045 }, 00:14:01.045 { 00:14:01.045 "method": "bdev_iscsi_set_options", 00:14:01.045 "params": { 00:14:01.045 "timeout_sec": 30 00:14:01.045 } 00:14:01.045 }, 00:14:01.045 { 00:14:01.045 "method": "bdev_nvme_set_options", 00:14:01.045 "params": { 00:14:01.045 "action_on_timeout": "none", 00:14:01.045 "timeout_us": 0, 00:14:01.045 "timeout_admin_us": 0, 00:14:01.045 "keep_alive_timeout_ms": 10000, 00:14:01.045 "arbitration_burst": 0, 00:14:01.045 "low_priority_weight": 0, 00:14:01.045 "medium_priority_weight": 0, 00:14:01.045 "high_priority_weight": 0, 00:14:01.045 "nvme_adminq_poll_period_us": 10000, 00:14:01.045 "nvme_ioq_poll_period_us": 0, 00:14:01.045 "io_queue_requests": 0, 00:14:01.045 "delay_cmd_submit": true, 00:14:01.045 "transport_retry_count": 4, 00:14:01.045 "bdev_retry_count": 3, 00:14:01.045 "transport_ack_timeout": 0, 00:14:01.045 "ctrlr_loss_timeout_sec": 0, 00:14:01.045 "reconnect_delay_sec": 0, 00:14:01.045 "fast_io_fail_timeout_sec": 0, 00:14:01.045 "disable_auto_failback": false, 00:14:01.045 "generate_uuids": false, 00:14:01.045 "transport_tos": 0, 00:14:01.045 "nvme_error_stat": false, 00:14:01.045 "rdma_srq_size": 0, 00:14:01.045 "io_path_stat": false, 00:14:01.045 "allow_accel_sequence": false, 00:14:01.045 "rdma_max_cq_size": 0, 00:14:01.045 "rdma_cm_event_timeout_ms": 0, 00:14:01.045 "dhchap_digests": [ 00:14:01.045 "sha256", 00:14:01.045 "sha384", 00:14:01.045 "sha512" 00:14:01.045 ], 00:14:01.045 "dhchap_dhgroups": [ 00:14:01.045 "null", 00:14:01.045 "ffdhe2048", 00:14:01.045 "ffdhe3072", 00:14:01.045 "ffdhe4096", 00:14:01.045 "ffdhe6144", 00:14:01.045 "ffdhe8192" 00:14:01.045 ] 00:14:01.045 } 00:14:01.045 }, 00:14:01.045 { 00:14:01.045 "method": "bdev_nvme_set_hotplug", 00:14:01.045 "params": { 00:14:01.045 "period_us": 100000, 00:14:01.045 "enable": false 00:14:01.045 } 00:14:01.045 }, 00:14:01.045 { 00:14:01.045 "method": "bdev_malloc_create", 00:14:01.045 "params": { 00:14:01.045 "name": "malloc0", 00:14:01.045 "num_blocks": 8192, 00:14:01.045 "block_size": 4096, 00:14:01.045 "physical_block_size": 4096, 00:14:01.045 "uuid": "f59d87d2-d86c-43ad-9d15-8e332625179d", 00:14:01.045 "optimal_io_boundary": 0 00:14:01.045 } 00:14:01.045 }, 00:14:01.045 { 00:14:01.045 "method": "bdev_wait_for_examine" 00:14:01.045 } 00:14:01.045 ] 00:14:01.045 }, 00:14:01.046 { 00:14:01.046 "subsystem": "nbd", 00:14:01.046 "config": [] 00:14:01.046 }, 00:14:01.046 { 00:14:01.046 "subsystem": "scheduler", 00:14:01.046 "config": [ 00:14:01.046 { 00:14:01.046 "method": "framework_set_scheduler", 00:14:01.046 "params": { 00:14:01.046 "name": "static" 00:14:01.046 } 00:14:01.046 } 00:14:01.046 ] 00:14:01.046 }, 00:14:01.046 { 00:14:01.046 "subsystem": "nvmf", 00:14:01.046 "config": [ 00:14:01.046 { 00:14:01.046 "method": "nvmf_set_config", 00:14:01.046 "params": { 00:14:01.046 "discovery_filter": "match_any", 00:14:01.046 "admin_cmd_passthru": { 00:14:01.046 "identify_ctrlr": false 00:14:01.046 } 00:14:01.046 } 00:14:01.046 }, 00:14:01.046 { 00:14:01.046 "method": "nvmf_set_max_subsystems", 00:14:01.046 "params": { 00:14:01.046 
"max_subsystems": 1024 00:14:01.046 } 00:14:01.046 }, 00:14:01.046 { 00:14:01.046 "method": "nvmf_set_crdt", 00:14:01.046 "params": { 00:14:01.046 "crdt1": 0, 00:14:01.046 "crdt2": 0, 00:14:01.046 "crdt3": 0 00:14:01.046 } 00:14:01.046 }, 00:14:01.046 { 00:14:01.046 "method": "nvmf_create_transport", 00:14:01.046 "params": { 00:14:01.046 "trtype": "TCP", 00:14:01.046 "max_queue_depth": 128, 00:14:01.046 "max_io_qpairs_per_ctrlr": 127, 00:14:01.046 "in_capsule_data_size": 4096, 00:14:01.046 "max_io_size": 131072, 00:14:01.046 "io_unit_size": 131072, 00:14:01.046 "max_aq_depth": 128, 00:14:01.046 "num_shared_buffers": 511, 00:14:01.046 "buf_cache_size": 4294967295, 00:14:01.046 "dif_insert_or_strip": false, 00:14:01.046 "zcopy": false, 00:14:01.046 "c2h_success": false, 00:14:01.046 "sock_priority": 0, 00:14:01.046 "abort_timeout_sec": 1, 00:14:01.046 "ack_timeout": 0, 00:14:01.046 "data_wr_pool_size": 0 00:14:01.046 } 00:14:01.046 }, 00:14:01.046 { 00:14:01.046 "method": "nvmf_create_subsystem", 00:14:01.046 "params": { 00:14:01.046 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:01.046 "allow_any_host": false, 00:14:01.046 "serial_number": "SPDK00000000000001", 00:14:01.046 "model_number": "SPDK bdev Controller", 00:14:01.046 "max_namespaces": 10, 00:14:01.046 "min_cntlid": 1, 00:14:01.046 "max_cntlid": 65519, 00:14:01.046 "ana_reporting": false 00:14:01.046 } 00:14:01.046 }, 00:14:01.046 { 00:14:01.046 "method": "nvmf_subsystem_add_host", 00:14:01.046 "params": { 00:14:01.046 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:01.046 "host": "nqn.2016-06.io.spdk:host1", 00:14:01.046 "psk": "/tmp/tmp.I6ECYVYnGY" 00:14:01.046 } 00:14:01.046 }, 00:14:01.046 { 00:14:01.046 "method": "nvmf_subsystem_add_ns", 00:14:01.046 "params": { 00:14:01.046 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:01.046 "namespace": { 00:14:01.046 "nsid": 1, 00:14:01.046 "bdev_name": "malloc0", 00:14:01.046 "nguid": "F59D87D2D86C43AD9D158E332625179D", 00:14:01.046 "uuid": "f59d87d2-d86c-43ad-9d15-8e332625179d", 00:14:01.046 "no_auto_visible": false 00:14:01.046 } 00:14:01.046 } 00:14:01.046 }, 00:14:01.046 { 00:14:01.046 "method": "nvmf_subsystem_add_listener", 00:14:01.046 "params": { 00:14:01.046 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:01.046 "listen_address": { 00:14:01.046 "trtype": "TCP", 00:14:01.046 "adrfam": "IPv4", 00:14:01.046 "traddr": "10.0.0.2", 00:14:01.046 "trsvcid": "4420" 00:14:01.046 }, 00:14:01.046 "secure_channel": true 00:14:01.046 } 00:14:01.046 } 00:14:01.046 ] 00:14:01.046 } 00:14:01.046 ] 00:14:01.046 }' 00:14:01.046 07:18:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73607 00:14:01.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.046 07:18:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73607 00:14:01.046 07:18:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:01.046 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73607 ']' 00:14:01.046 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.046 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.046 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:01.046 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.046 07:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:01.046 [2024-07-15 07:18:09.858793] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:14:01.046 [2024-07-15 07:18:09.858903] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.305 [2024-07-15 07:18:10.003777] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.305 [2024-07-15 07:18:10.063064] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.305 [2024-07-15 07:18:10.063130] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.305 [2024-07-15 07:18:10.063143] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.305 [2024-07-15 07:18:10.063151] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.305 [2024-07-15 07:18:10.063158] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:01.305 [2024-07-15 07:18:10.063238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.305 [2024-07-15 07:18:10.205674] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:01.305 [2024-07-15 07:18:10.253676] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.564 [2024-07-15 07:18:10.269577] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:01.564 [2024-07-15 07:18:10.285581] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:01.564 [2024-07-15 07:18:10.285764] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.132 07:18:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.132 07:18:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:02.132 07:18:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:02.132 07:18:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:02.132 07:18:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.132 07:18:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.132 07:18:10 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=73639 00:14:02.132 07:18:10 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 73639 /var/tmp/bdevperf.sock 00:14:02.132 07:18:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73639 ']' 00:14:02.132 07:18:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:02.132 07:18:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:02.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:02.132 07:18:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
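The bdevperf initiator that follows is driven the same way: the JSON dumped below goes in through '-c /dev/fd/63' and already contains the bdev_nvme_attach_controller call with "psk": "/tmp/tmp.I6ECYVYnGY", so the TLS connection is made during start-up and the workload is only triggered later over the RPC socket. A hedged sketch of that two-step pattern follows; paths and flags are copied from the trace below, BDEVPERF_JSON is a placeholder for the echoed JSON, waitforlisten is the suite's own shell helper (used with the same pid-then-socket arguments it gets in the trace), and the backgrounding with & is an assumption about the plumbing that the trace does not show verbatim.
# Sketch only: start bdevperf idle (-z), wait for its RPC socket, then run the workload.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
    -c <(echo "$BDEVPERF_JSON") &
waitforlisten "$!" /var/tmp/bdevperf.sock
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 \
    -s /var/tmp/bdevperf.sock perform_tests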
00:14:02.132 07:18:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:02.132 07:18:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.132 07:18:10 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:02.132 07:18:10 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:14:02.132 "subsystems": [ 00:14:02.132 { 00:14:02.132 "subsystem": "keyring", 00:14:02.132 "config": [] 00:14:02.132 }, 00:14:02.132 { 00:14:02.132 "subsystem": "iobuf", 00:14:02.132 "config": [ 00:14:02.132 { 00:14:02.132 "method": "iobuf_set_options", 00:14:02.132 "params": { 00:14:02.132 "small_pool_count": 8192, 00:14:02.132 "large_pool_count": 1024, 00:14:02.132 "small_bufsize": 8192, 00:14:02.132 "large_bufsize": 135168 00:14:02.132 } 00:14:02.132 } 00:14:02.132 ] 00:14:02.132 }, 00:14:02.132 { 00:14:02.132 "subsystem": "sock", 00:14:02.132 "config": [ 00:14:02.132 { 00:14:02.132 "method": "sock_set_default_impl", 00:14:02.132 "params": { 00:14:02.132 "impl_name": "uring" 00:14:02.132 } 00:14:02.132 }, 00:14:02.132 { 00:14:02.132 "method": "sock_impl_set_options", 00:14:02.132 "params": { 00:14:02.132 "impl_name": "ssl", 00:14:02.132 "recv_buf_size": 4096, 00:14:02.132 "send_buf_size": 4096, 00:14:02.132 "enable_recv_pipe": true, 00:14:02.132 "enable_quickack": false, 00:14:02.132 "enable_placement_id": 0, 00:14:02.132 "enable_zerocopy_send_server": true, 00:14:02.132 "enable_zerocopy_send_client": false, 00:14:02.132 "zerocopy_threshold": 0, 00:14:02.132 "tls_version": 0, 00:14:02.132 "enable_ktls": false 00:14:02.132 } 00:14:02.132 }, 00:14:02.132 { 00:14:02.132 "method": "sock_impl_set_options", 00:14:02.132 "params": { 00:14:02.132 "impl_name": "posix", 00:14:02.132 "recv_buf_size": 2097152, 00:14:02.132 "send_buf_size": 2097152, 00:14:02.132 "enable_recv_pipe": true, 00:14:02.132 "enable_quickack": false, 00:14:02.132 "enable_placement_id": 0, 00:14:02.132 "enable_zerocopy_send_server": true, 00:14:02.132 "enable_zerocopy_send_client": false, 00:14:02.132 "zerocopy_threshold": 0, 00:14:02.132 "tls_version": 0, 00:14:02.132 "enable_ktls": false 00:14:02.132 } 00:14:02.132 }, 00:14:02.132 { 00:14:02.132 "method": "sock_impl_set_options", 00:14:02.132 "params": { 00:14:02.132 "impl_name": "uring", 00:14:02.132 "recv_buf_size": 2097152, 00:14:02.132 "send_buf_size": 2097152, 00:14:02.132 "enable_recv_pipe": true, 00:14:02.132 "enable_quickack": false, 00:14:02.132 "enable_placement_id": 0, 00:14:02.132 "enable_zerocopy_send_server": false, 00:14:02.132 "enable_zerocopy_send_client": false, 00:14:02.132 "zerocopy_threshold": 0, 00:14:02.132 "tls_version": 0, 00:14:02.132 "enable_ktls": false 00:14:02.132 } 00:14:02.132 } 00:14:02.132 ] 00:14:02.132 }, 00:14:02.132 { 00:14:02.132 "subsystem": "vmd", 00:14:02.132 "config": [] 00:14:02.132 }, 00:14:02.132 { 00:14:02.132 "subsystem": "accel", 00:14:02.132 "config": [ 00:14:02.132 { 00:14:02.132 "method": "accel_set_options", 00:14:02.132 "params": { 00:14:02.132 "small_cache_size": 128, 00:14:02.132 "large_cache_size": 16, 00:14:02.132 "task_count": 2048, 00:14:02.132 "sequence_count": 2048, 00:14:02.132 "buf_count": 2048 00:14:02.132 } 00:14:02.132 } 00:14:02.132 ] 00:14:02.132 }, 00:14:02.132 { 00:14:02.132 "subsystem": "bdev", 00:14:02.132 "config": [ 00:14:02.132 { 00:14:02.132 "method": "bdev_set_options", 00:14:02.132 "params": { 00:14:02.132 "bdev_io_pool_size": 65535, 00:14:02.132 
"bdev_io_cache_size": 256, 00:14:02.132 "bdev_auto_examine": true, 00:14:02.132 "iobuf_small_cache_size": 128, 00:14:02.132 "iobuf_large_cache_size": 16 00:14:02.132 } 00:14:02.132 }, 00:14:02.132 { 00:14:02.132 "method": "bdev_raid_set_options", 00:14:02.132 "params": { 00:14:02.132 "process_window_size_kb": 1024 00:14:02.132 } 00:14:02.132 }, 00:14:02.132 { 00:14:02.132 "method": "bdev_iscsi_set_options", 00:14:02.132 "params": { 00:14:02.132 "timeout_sec": 30 00:14:02.132 } 00:14:02.132 }, 00:14:02.132 { 00:14:02.132 "method": "bdev_nvme_set_options", 00:14:02.132 "params": { 00:14:02.132 "action_on_timeout": "none", 00:14:02.132 "timeout_us": 0, 00:14:02.132 "timeout_admin_us": 0, 00:14:02.132 "keep_alive_timeout_ms": 10000, 00:14:02.132 "arbitration_burst": 0, 00:14:02.132 "low_priority_weight": 0, 00:14:02.132 "medium_priority_weight": 0, 00:14:02.132 "high_priority_weight": 0, 00:14:02.132 "nvme_adminq_poll_period_us": 10000, 00:14:02.132 "nvme_ioq_poll_period_us": 0, 00:14:02.132 "io_queue_requests": 512, 00:14:02.132 "delay_cmd_submit": true, 00:14:02.132 "transport_retry_count": 4, 00:14:02.132 "bdev_retry_count": 3, 00:14:02.132 "transport_ack_timeout": 0, 00:14:02.132 "ctrlr_loss_timeout_sec": 0, 00:14:02.132 "reconnect_delay_sec": 0, 00:14:02.132 "fast_io_fail_timeout_sec": 0, 00:14:02.132 "disable_auto_failback": false, 00:14:02.132 "generate_uuids": false, 00:14:02.132 "transport_tos": 0, 00:14:02.132 "nvme_error_stat": false, 00:14:02.132 "rdma_srq_size": 0, 00:14:02.132 "io_path_stat": false, 00:14:02.132 "allow_accel_sequence": false, 00:14:02.132 "rdma_max_cq_size": 0, 00:14:02.132 "rdma_cm_event_timeout_ms": 0, 00:14:02.132 "dhchap_digests": [ 00:14:02.132 "sha256", 00:14:02.132 "sha384", 00:14:02.132 "sha512" 00:14:02.132 ], 00:14:02.132 "dhchap_dhgroups": [ 00:14:02.132 "null", 00:14:02.132 "ffdhe2048", 00:14:02.132 "ffdhe3072", 00:14:02.132 "ffdhe4096", 00:14:02.132 "ffdhe6144", 00:14:02.132 "ffdhe8192" 00:14:02.132 ] 00:14:02.132 } 00:14:02.132 }, 00:14:02.132 { 00:14:02.132 "method": "bdev_nvme_attach_controller", 00:14:02.132 "params": { 00:14:02.132 "name": "TLSTEST", 00:14:02.132 "trtype": "TCP", 00:14:02.132 "adrfam": "IPv4", 00:14:02.132 "traddr": "10.0.0.2", 00:14:02.132 "trsvcid": "4420", 00:14:02.132 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.132 "prchk_reftag": false, 00:14:02.132 "prchk_guard": false, 00:14:02.132 "ctrlr_loss_timeout_sec": 0, 00:14:02.132 "reconnect_delay_sec": 0, 00:14:02.132 "fast_io_fail_timeout_sec": 0, 00:14:02.132 "psk": "/tmp/tmp.I6ECYVYnGY", 00:14:02.132 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:02.132 "hdgst": false, 00:14:02.132 "ddgst": false 00:14:02.132 } 00:14:02.132 }, 00:14:02.132 { 00:14:02.132 "method": "bdev_nvme_set_hotplug", 00:14:02.132 "params": { 00:14:02.132 "period_us": 100000, 00:14:02.132 "enable": false 00:14:02.132 } 00:14:02.132 }, 00:14:02.132 { 00:14:02.132 "method": "bdev_wait_for_examine" 00:14:02.132 } 00:14:02.132 ] 00:14:02.132 }, 00:14:02.132 { 00:14:02.132 "subsystem": "nbd", 00:14:02.132 "config": [] 00:14:02.132 } 00:14:02.132 ] 00:14:02.132 }' 00:14:02.132 [2024-07-15 07:18:10.954123] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:14:02.133 [2024-07-15 07:18:10.954207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73639 ] 00:14:02.391 [2024-07-15 07:18:11.094572] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.391 [2024-07-15 07:18:11.162840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.391 [2024-07-15 07:18:11.276783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:02.391 [2024-07-15 07:18:11.301557] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:02.391 [2024-07-15 07:18:11.301671] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:03.326 07:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:03.326 07:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:03.326 07:18:11 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:03.326 Running I/O for 10 seconds... 00:14:13.301 00:14:13.301 Latency(us) 00:14:13.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.301 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:13.301 Verification LBA range: start 0x0 length 0x2000 00:14:13.301 TLSTESTn1 : 10.01 3910.67 15.28 0.00 0.00 32672.62 6196.13 25737.77 00:14:13.301 =================================================================================================================== 00:14:13.301 Total : 3910.67 15.28 0.00 0.00 32672.62 6196.13 25737.77 00:14:13.301 0 00:14:13.301 07:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:13.301 07:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 73639 00:14:13.301 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73639 ']' 00:14:13.301 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73639 00:14:13.301 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:13.301 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:13.301 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73639 00:14:13.301 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:13.301 killing process with pid 73639 00:14:13.301 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:13.301 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73639' 00:14:13.301 Received shutdown signal, test time was about 10.000000 seconds 00:14:13.301 00:14:13.301 Latency(us) 00:14:13.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.301 =================================================================================================================== 00:14:13.301 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:13.301 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73639 00:14:13.301 [2024-07-15 07:18:22.138871] app.c:1023:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:13.301 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73639 00:14:13.559 07:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 73607 00:14:13.559 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73607 ']' 00:14:13.559 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73607 00:14:13.559 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:13.559 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:13.559 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73607 00:14:13.559 killing process with pid 73607 00:14:13.559 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:13.559 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:13.559 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73607' 00:14:13.559 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73607 00:14:13.559 [2024-07-15 07:18:22.318049] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:13.559 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73607 00:14:13.559 07:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:14:13.559 07:18:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:13.559 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:13.559 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.559 07:18:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73772 00:14:13.559 07:18:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73772 00:14:13.559 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73772 ']' 00:14:13.559 07:18:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:13.559 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.559 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.559 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.559 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.559 07:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.817 [2024-07-15 07:18:22.533379] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:14:13.817 [2024-07-15 07:18:22.533464] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.817 [2024-07-15 07:18:22.666835] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.817 [2024-07-15 07:18:22.723827] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:13.817 [2024-07-15 07:18:22.723880] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.817 [2024-07-15 07:18:22.723892] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.817 [2024-07-15 07:18:22.723901] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.817 [2024-07-15 07:18:22.723908] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:13.817 [2024-07-15 07:18:22.723938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.817 [2024-07-15 07:18:22.752058] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:14.749 07:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:14.749 07:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:14.749 07:18:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:14.749 07:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:14.749 07:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.749 07:18:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.749 07:18:23 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.I6ECYVYnGY 00:14:14.749 07:18:23 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.I6ECYVYnGY 00:14:14.749 07:18:23 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:15.006 [2024-07-15 07:18:23.769513] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.006 07:18:23 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:15.263 07:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:15.520 [2024-07-15 07:18:24.245587] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:15.520 [2024-07-15 07:18:24.245803] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:15.520 07:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:15.777 malloc0 00:14:15.777 07:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:16.034 07:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.I6ECYVYnGY 00:14:16.291 [2024-07-15 07:18:25.088337] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:16.291 07:18:25 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:16.291 07:18:25 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=73832 00:14:16.291 07:18:25 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 
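Gathered in one place, the setup_nvmf_tgt sequence that just ran above (target/tls.sh@51-58) is the whole recipe for standing up a TLS-capable target against an already-running nvmf_tgt. The commands below are copied from the trace; only the RPC variable is introduced here as shorthand for the rpc.py path, and the comments reflect what the surrounding notices in this log say about each step.
# Same RPC sequence as target/tls.sh@51-58 above, collected for reference.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k requests the TLS (secure_channel) listener; see the "TLS support is considered experimental" notice above
$RPC bdev_malloc_create 32 4096 -b malloc0          # 32 MiB malloc bdev (8192 blocks of 4096 bytes in the saved config)
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.I6ECYVYnGY   # raw PSK-path form, flagged as deprecated in the warnings above
This phase passes the key as a raw path; the later phase at target/tls.sh@255-256 switches to the keyring_file_add_key form, which is the direction the deprecation warnings in this log appear to point toward.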
00:14:16.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:16.291 07:18:25 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 73832 /var/tmp/bdevperf.sock 00:14:16.291 07:18:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73832 ']' 00:14:16.291 07:18:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:16.291 07:18:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:16.291 07:18:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:16.291 07:18:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:16.291 07:18:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:16.291 [2024-07-15 07:18:25.174654] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:14:16.291 [2024-07-15 07:18:25.175023] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73832 ] 00:14:16.552 [2024-07-15 07:18:25.325685] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.552 [2024-07-15 07:18:25.396571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.552 [2024-07-15 07:18:25.429264] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:16.552 07:18:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:16.552 07:18:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:16.552 07:18:25 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.I6ECYVYnGY 00:14:16.811 07:18:25 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:17.069 [2024-07-15 07:18:25.969293] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:17.328 nvme0n1 00:14:17.328 07:18:26 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:17.328 Running I/O for 1 seconds... 
00:14:18.703 00:14:18.703 Latency(us) 00:14:18.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.703 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:18.703 Verification LBA range: start 0x0 length 0x2000 00:14:18.703 nvme0n1 : 1.03 3829.62 14.96 0.00 0.00 32976.81 7238.75 20494.89 00:14:18.703 =================================================================================================================== 00:14:18.703 Total : 3829.62 14.96 0.00 0.00 32976.81 7238.75 20494.89 00:14:18.703 0 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 73832 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73832 ']' 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73832 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73832 00:14:18.703 killing process with pid 73832 00:14:18.703 Received shutdown signal, test time was about 1.000000 seconds 00:14:18.703 00:14:18.703 Latency(us) 00:14:18.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.703 =================================================================================================================== 00:14:18.703 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73832' 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73832 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73832 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 73772 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73772 ']' 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73772 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73772 00:14:18.703 killing process with pid 73772 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73772' 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73772 00:14:18.703 [2024-07-15 07:18:27.452571] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73772 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73870 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73870 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73870 ']' 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:18.703 07:18:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.962 [2024-07-15 07:18:27.677298] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:14:18.962 [2024-07-15 07:18:27.677401] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.962 [2024-07-15 07:18:27.815902] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.962 [2024-07-15 07:18:27.872939] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.962 [2024-07-15 07:18:27.872994] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.962 [2024-07-15 07:18:27.873006] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.962 [2024-07-15 07:18:27.873014] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.962 [2024-07-15 07:18:27.873021] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:18.962 [2024-07-15 07:18:27.873051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.962 [2024-07-15 07:18:27.901162] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:19.897 07:18:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:19.897 07:18:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:19.897 07:18:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:19.897 07:18:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:19.897 07:18:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:19.897 07:18:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.897 07:18:28 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:14:19.897 07:18:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.897 07:18:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:19.897 [2024-07-15 07:18:28.682576] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.897 malloc0 00:14:19.897 [2024-07-15 07:18:28.708937] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:19.897 [2024-07-15 07:18:28.709159] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:19.897 07:18:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.897 07:18:28 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=73902 00:14:19.897 07:18:28 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 73902 /var/tmp/bdevperf.sock 00:14:19.897 07:18:28 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:19.897 07:18:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73902 ']' 00:14:19.897 07:18:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:19.897 07:18:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:19.897 07:18:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:19.897 07:18:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:19.897 07:18:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:19.897 [2024-07-15 07:18:28.803503] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:14:19.897 [2024-07-15 07:18:28.803915] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73902 ] 00:14:20.156 [2024-07-15 07:18:28.951445] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.156 [2024-07-15 07:18:29.029551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.156 [2024-07-15 07:18:29.063738] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:21.089 07:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:21.089 07:18:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:21.089 07:18:29 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.I6ECYVYnGY 00:14:21.089 07:18:30 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:21.347 [2024-07-15 07:18:30.288608] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:21.605 nvme0n1 00:14:21.605 07:18:30 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:21.605 Running I/O for 1 seconds... 00:14:22.980 00:14:22.980 Latency(us) 00:14:22.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.980 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:22.980 Verification LBA range: start 0x0 length 0x2000 00:14:22.980 nvme0n1 : 1.02 3891.60 15.20 0.00 0.00 32526.94 7298.33 26691.03 00:14:22.980 =================================================================================================================== 00:14:22.980 Total : 3891.60 15.20 0.00 0.00 32526.94 7298.33 26691.03 00:14:22.980 0 00:14:22.980 07:18:31 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:14:22.980 07:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.980 07:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.980 07:18:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.980 07:18:31 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:14:22.980 "subsystems": [ 00:14:22.980 { 00:14:22.981 "subsystem": "keyring", 00:14:22.981 "config": [ 00:14:22.981 { 00:14:22.981 "method": "keyring_file_add_key", 00:14:22.981 "params": { 00:14:22.981 "name": "key0", 00:14:22.981 "path": "/tmp/tmp.I6ECYVYnGY" 00:14:22.981 } 00:14:22.981 } 00:14:22.981 ] 00:14:22.981 }, 00:14:22.981 { 00:14:22.981 "subsystem": "iobuf", 00:14:22.981 "config": [ 00:14:22.981 { 00:14:22.981 "method": "iobuf_set_options", 00:14:22.981 "params": { 00:14:22.981 "small_pool_count": 8192, 00:14:22.981 "large_pool_count": 1024, 00:14:22.981 "small_bufsize": 8192, 00:14:22.981 "large_bufsize": 135168 00:14:22.981 } 00:14:22.981 } 00:14:22.981 ] 00:14:22.981 }, 00:14:22.981 { 00:14:22.981 "subsystem": "sock", 00:14:22.981 "config": [ 00:14:22.981 { 00:14:22.981 "method": "sock_set_default_impl", 00:14:22.981 "params": { 00:14:22.981 "impl_name": "uring" 
00:14:22.981 } 00:14:22.981 }, 00:14:22.981 { 00:14:22.981 "method": "sock_impl_set_options", 00:14:22.981 "params": { 00:14:22.981 "impl_name": "ssl", 00:14:22.981 "recv_buf_size": 4096, 00:14:22.981 "send_buf_size": 4096, 00:14:22.981 "enable_recv_pipe": true, 00:14:22.981 "enable_quickack": false, 00:14:22.981 "enable_placement_id": 0, 00:14:22.981 "enable_zerocopy_send_server": true, 00:14:22.981 "enable_zerocopy_send_client": false, 00:14:22.981 "zerocopy_threshold": 0, 00:14:22.981 "tls_version": 0, 00:14:22.981 "enable_ktls": false 00:14:22.981 } 00:14:22.981 }, 00:14:22.981 { 00:14:22.981 "method": "sock_impl_set_options", 00:14:22.981 "params": { 00:14:22.981 "impl_name": "posix", 00:14:22.981 "recv_buf_size": 2097152, 00:14:22.981 "send_buf_size": 2097152, 00:14:22.981 "enable_recv_pipe": true, 00:14:22.981 "enable_quickack": false, 00:14:22.981 "enable_placement_id": 0, 00:14:22.981 "enable_zerocopy_send_server": true, 00:14:22.981 "enable_zerocopy_send_client": false, 00:14:22.981 "zerocopy_threshold": 0, 00:14:22.981 "tls_version": 0, 00:14:22.981 "enable_ktls": false 00:14:22.981 } 00:14:22.981 }, 00:14:22.981 { 00:14:22.981 "method": "sock_impl_set_options", 00:14:22.981 "params": { 00:14:22.981 "impl_name": "uring", 00:14:22.981 "recv_buf_size": 2097152, 00:14:22.981 "send_buf_size": 2097152, 00:14:22.981 "enable_recv_pipe": true, 00:14:22.981 "enable_quickack": false, 00:14:22.981 "enable_placement_id": 0, 00:14:22.981 "enable_zerocopy_send_server": false, 00:14:22.981 "enable_zerocopy_send_client": false, 00:14:22.981 "zerocopy_threshold": 0, 00:14:22.981 "tls_version": 0, 00:14:22.981 "enable_ktls": false 00:14:22.981 } 00:14:22.981 } 00:14:22.981 ] 00:14:22.981 }, 00:14:22.981 { 00:14:22.981 "subsystem": "vmd", 00:14:22.981 "config": [] 00:14:22.981 }, 00:14:22.981 { 00:14:22.981 "subsystem": "accel", 00:14:22.981 "config": [ 00:14:22.981 { 00:14:22.981 "method": "accel_set_options", 00:14:22.981 "params": { 00:14:22.981 "small_cache_size": 128, 00:14:22.981 "large_cache_size": 16, 00:14:22.981 "task_count": 2048, 00:14:22.981 "sequence_count": 2048, 00:14:22.981 "buf_count": 2048 00:14:22.981 } 00:14:22.981 } 00:14:22.981 ] 00:14:22.981 }, 00:14:22.981 { 00:14:22.981 "subsystem": "bdev", 00:14:22.981 "config": [ 00:14:22.981 { 00:14:22.981 "method": "bdev_set_options", 00:14:22.981 "params": { 00:14:22.981 "bdev_io_pool_size": 65535, 00:14:22.981 "bdev_io_cache_size": 256, 00:14:22.981 "bdev_auto_examine": true, 00:14:22.981 "iobuf_small_cache_size": 128, 00:14:22.981 "iobuf_large_cache_size": 16 00:14:22.981 } 00:14:22.981 }, 00:14:22.981 { 00:14:22.981 "method": "bdev_raid_set_options", 00:14:22.981 "params": { 00:14:22.981 "process_window_size_kb": 1024 00:14:22.981 } 00:14:22.981 }, 00:14:22.981 { 00:14:22.981 "method": "bdev_iscsi_set_options", 00:14:22.981 "params": { 00:14:22.981 "timeout_sec": 30 00:14:22.981 } 00:14:22.981 }, 00:14:22.981 { 00:14:22.981 "method": "bdev_nvme_set_options", 00:14:22.981 "params": { 00:14:22.981 "action_on_timeout": "none", 00:14:22.981 "timeout_us": 0, 00:14:22.981 "timeout_admin_us": 0, 00:14:22.981 "keep_alive_timeout_ms": 10000, 00:14:22.981 "arbitration_burst": 0, 00:14:22.981 "low_priority_weight": 0, 00:14:22.981 "medium_priority_weight": 0, 00:14:22.981 "high_priority_weight": 0, 00:14:22.981 "nvme_adminq_poll_period_us": 10000, 00:14:22.981 "nvme_ioq_poll_period_us": 0, 00:14:22.981 "io_queue_requests": 0, 00:14:22.981 "delay_cmd_submit": true, 00:14:22.981 "transport_retry_count": 4, 00:14:22.981 "bdev_retry_count": 3, 
00:14:22.981 "transport_ack_timeout": 0, 00:14:22.981 "ctrlr_loss_timeout_sec": 0, 00:14:22.981 "reconnect_delay_sec": 0, 00:14:22.981 "fast_io_fail_timeout_sec": 0, 00:14:22.981 "disable_auto_failback": false, 00:14:22.981 "generate_uuids": false, 00:14:22.981 "transport_tos": 0, 00:14:22.981 "nvme_error_stat": false, 00:14:22.981 "rdma_srq_size": 0, 00:14:22.981 "io_path_stat": false, 00:14:22.981 "allow_accel_sequence": false, 00:14:22.981 "rdma_max_cq_size": 0, 00:14:22.981 "rdma_cm_event_timeout_ms": 0, 00:14:22.981 "dhchap_digests": [ 00:14:22.981 "sha256", 00:14:22.981 "sha384", 00:14:22.981 "sha512" 00:14:22.981 ], 00:14:22.981 "dhchap_dhgroups": [ 00:14:22.981 "null", 00:14:22.981 "ffdhe2048", 00:14:22.981 "ffdhe3072", 00:14:22.981 "ffdhe4096", 00:14:22.981 "ffdhe6144", 00:14:22.981 "ffdhe8192" 00:14:22.981 ] 00:14:22.981 } 00:14:22.981 }, 00:14:22.981 { 00:14:22.981 "method": "bdev_nvme_set_hotplug", 00:14:22.981 "params": { 00:14:22.981 "period_us": 100000, 00:14:22.981 "enable": false 00:14:22.981 } 00:14:22.981 }, 00:14:22.981 { 00:14:22.981 "method": "bdev_malloc_create", 00:14:22.981 "params": { 00:14:22.981 "name": "malloc0", 00:14:22.981 "num_blocks": 8192, 00:14:22.981 "block_size": 4096, 00:14:22.981 "physical_block_size": 4096, 00:14:22.981 "uuid": "5b4eed3a-614b-4a51-9aad-1019f418b407", 00:14:22.981 "optimal_io_boundary": 0 00:14:22.981 } 00:14:22.981 }, 00:14:22.981 { 00:14:22.981 "method": "bdev_wait_for_examine" 00:14:22.981 } 00:14:22.981 ] 00:14:22.981 }, 00:14:22.981 { 00:14:22.981 "subsystem": "nbd", 00:14:22.981 "config": [] 00:14:22.981 }, 00:14:22.981 { 00:14:22.981 "subsystem": "scheduler", 00:14:22.981 "config": [ 00:14:22.981 { 00:14:22.981 "method": "framework_set_scheduler", 00:14:22.981 "params": { 00:14:22.981 "name": "static" 00:14:22.981 } 00:14:22.981 } 00:14:22.981 ] 00:14:22.981 }, 00:14:22.981 { 00:14:22.981 "subsystem": "nvmf", 00:14:22.981 "config": [ 00:14:22.981 { 00:14:22.981 "method": "nvmf_set_config", 00:14:22.981 "params": { 00:14:22.981 "discovery_filter": "match_any", 00:14:22.981 "admin_cmd_passthru": { 00:14:22.981 "identify_ctrlr": false 00:14:22.981 } 00:14:22.981 } 00:14:22.981 }, 00:14:22.981 { 00:14:22.981 "method": "nvmf_set_max_subsystems", 00:14:22.981 "params": { 00:14:22.981 "max_subsystems": 1024 00:14:22.981 } 00:14:22.981 }, 00:14:22.981 { 00:14:22.981 "method": "nvmf_set_crdt", 00:14:22.981 "params": { 00:14:22.981 "crdt1": 0, 00:14:22.981 "crdt2": 0, 00:14:22.981 "crdt3": 0 00:14:22.981 } 00:14:22.981 }, 00:14:22.981 { 00:14:22.981 "method": "nvmf_create_transport", 00:14:22.981 "params": { 00:14:22.981 "trtype": "TCP", 00:14:22.982 "max_queue_depth": 128, 00:14:22.982 "max_io_qpairs_per_ctrlr": 127, 00:14:22.982 "in_capsule_data_size": 4096, 00:14:22.982 "max_io_size": 131072, 00:14:22.982 "io_unit_size": 131072, 00:14:22.982 "max_aq_depth": 128, 00:14:22.982 "num_shared_buffers": 511, 00:14:22.982 "buf_cache_size": 4294967295, 00:14:22.982 "dif_insert_or_strip": false, 00:14:22.982 "zcopy": false, 00:14:22.982 "c2h_success": false, 00:14:22.982 "sock_priority": 0, 00:14:22.982 "abort_timeout_sec": 1, 00:14:22.982 "ack_timeout": 0, 00:14:22.982 "data_wr_pool_size": 0 00:14:22.982 } 00:14:22.982 }, 00:14:22.982 { 00:14:22.982 "method": "nvmf_create_subsystem", 00:14:22.982 "params": { 00:14:22.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.982 "allow_any_host": false, 00:14:22.982 "serial_number": "00000000000000000000", 00:14:22.982 "model_number": "SPDK bdev Controller", 00:14:22.982 "max_namespaces": 32, 
00:14:22.982 "min_cntlid": 1, 00:14:22.982 "max_cntlid": 65519, 00:14:22.982 "ana_reporting": false 00:14:22.982 } 00:14:22.982 }, 00:14:22.982 { 00:14:22.982 "method": "nvmf_subsystem_add_host", 00:14:22.982 "params": { 00:14:22.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.982 "host": "nqn.2016-06.io.spdk:host1", 00:14:22.982 "psk": "key0" 00:14:22.982 } 00:14:22.982 }, 00:14:22.982 { 00:14:22.982 "method": "nvmf_subsystem_add_ns", 00:14:22.982 "params": { 00:14:22.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.982 "namespace": { 00:14:22.982 "nsid": 1, 00:14:22.982 "bdev_name": "malloc0", 00:14:22.982 "nguid": "5B4EED3A614B4A519AAD1019F418B407", 00:14:22.982 "uuid": "5b4eed3a-614b-4a51-9aad-1019f418b407", 00:14:22.982 "no_auto_visible": false 00:14:22.982 } 00:14:22.982 } 00:14:22.982 }, 00:14:22.982 { 00:14:22.982 "method": "nvmf_subsystem_add_listener", 00:14:22.982 "params": { 00:14:22.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.982 "listen_address": { 00:14:22.982 "trtype": "TCP", 00:14:22.982 "adrfam": "IPv4", 00:14:22.982 "traddr": "10.0.0.2", 00:14:22.982 "trsvcid": "4420" 00:14:22.982 }, 00:14:22.982 "secure_channel": true 00:14:22.982 } 00:14:22.982 } 00:14:22.982 ] 00:14:22.982 } 00:14:22.982 ] 00:14:22.982 }' 00:14:22.982 07:18:31 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:23.241 07:18:32 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:14:23.241 "subsystems": [ 00:14:23.241 { 00:14:23.241 "subsystem": "keyring", 00:14:23.241 "config": [ 00:14:23.241 { 00:14:23.241 "method": "keyring_file_add_key", 00:14:23.241 "params": { 00:14:23.241 "name": "key0", 00:14:23.241 "path": "/tmp/tmp.I6ECYVYnGY" 00:14:23.241 } 00:14:23.241 } 00:14:23.241 ] 00:14:23.241 }, 00:14:23.241 { 00:14:23.241 "subsystem": "iobuf", 00:14:23.241 "config": [ 00:14:23.241 { 00:14:23.241 "method": "iobuf_set_options", 00:14:23.241 "params": { 00:14:23.241 "small_pool_count": 8192, 00:14:23.241 "large_pool_count": 1024, 00:14:23.241 "small_bufsize": 8192, 00:14:23.241 "large_bufsize": 135168 00:14:23.241 } 00:14:23.241 } 00:14:23.241 ] 00:14:23.241 }, 00:14:23.241 { 00:14:23.241 "subsystem": "sock", 00:14:23.241 "config": [ 00:14:23.241 { 00:14:23.241 "method": "sock_set_default_impl", 00:14:23.241 "params": { 00:14:23.241 "impl_name": "uring" 00:14:23.241 } 00:14:23.241 }, 00:14:23.241 { 00:14:23.241 "method": "sock_impl_set_options", 00:14:23.241 "params": { 00:14:23.241 "impl_name": "ssl", 00:14:23.241 "recv_buf_size": 4096, 00:14:23.241 "send_buf_size": 4096, 00:14:23.241 "enable_recv_pipe": true, 00:14:23.241 "enable_quickack": false, 00:14:23.241 "enable_placement_id": 0, 00:14:23.241 "enable_zerocopy_send_server": true, 00:14:23.241 "enable_zerocopy_send_client": false, 00:14:23.241 "zerocopy_threshold": 0, 00:14:23.241 "tls_version": 0, 00:14:23.241 "enable_ktls": false 00:14:23.241 } 00:14:23.241 }, 00:14:23.241 { 00:14:23.241 "method": "sock_impl_set_options", 00:14:23.241 "params": { 00:14:23.241 "impl_name": "posix", 00:14:23.241 "recv_buf_size": 2097152, 00:14:23.241 "send_buf_size": 2097152, 00:14:23.241 "enable_recv_pipe": true, 00:14:23.241 "enable_quickack": false, 00:14:23.241 "enable_placement_id": 0, 00:14:23.241 "enable_zerocopy_send_server": true, 00:14:23.241 "enable_zerocopy_send_client": false, 00:14:23.241 "zerocopy_threshold": 0, 00:14:23.241 "tls_version": 0, 00:14:23.241 "enable_ktls": false 00:14:23.241 } 00:14:23.241 }, 00:14:23.241 { 00:14:23.241 "method": 
"sock_impl_set_options", 00:14:23.241 "params": { 00:14:23.241 "impl_name": "uring", 00:14:23.241 "recv_buf_size": 2097152, 00:14:23.241 "send_buf_size": 2097152, 00:14:23.241 "enable_recv_pipe": true, 00:14:23.241 "enable_quickack": false, 00:14:23.241 "enable_placement_id": 0, 00:14:23.241 "enable_zerocopy_send_server": false, 00:14:23.241 "enable_zerocopy_send_client": false, 00:14:23.241 "zerocopy_threshold": 0, 00:14:23.241 "tls_version": 0, 00:14:23.241 "enable_ktls": false 00:14:23.241 } 00:14:23.241 } 00:14:23.241 ] 00:14:23.241 }, 00:14:23.241 { 00:14:23.241 "subsystem": "vmd", 00:14:23.241 "config": [] 00:14:23.241 }, 00:14:23.241 { 00:14:23.241 "subsystem": "accel", 00:14:23.241 "config": [ 00:14:23.241 { 00:14:23.241 "method": "accel_set_options", 00:14:23.241 "params": { 00:14:23.241 "small_cache_size": 128, 00:14:23.241 "large_cache_size": 16, 00:14:23.241 "task_count": 2048, 00:14:23.241 "sequence_count": 2048, 00:14:23.241 "buf_count": 2048 00:14:23.241 } 00:14:23.241 } 00:14:23.241 ] 00:14:23.241 }, 00:14:23.241 { 00:14:23.241 "subsystem": "bdev", 00:14:23.241 "config": [ 00:14:23.241 { 00:14:23.241 "method": "bdev_set_options", 00:14:23.241 "params": { 00:14:23.241 "bdev_io_pool_size": 65535, 00:14:23.241 "bdev_io_cache_size": 256, 00:14:23.241 "bdev_auto_examine": true, 00:14:23.241 "iobuf_small_cache_size": 128, 00:14:23.241 "iobuf_large_cache_size": 16 00:14:23.241 } 00:14:23.241 }, 00:14:23.241 { 00:14:23.241 "method": "bdev_raid_set_options", 00:14:23.241 "params": { 00:14:23.241 "process_window_size_kb": 1024 00:14:23.241 } 00:14:23.241 }, 00:14:23.241 { 00:14:23.241 "method": "bdev_iscsi_set_options", 00:14:23.241 "params": { 00:14:23.241 "timeout_sec": 30 00:14:23.241 } 00:14:23.241 }, 00:14:23.241 { 00:14:23.241 "method": "bdev_nvme_set_options", 00:14:23.241 "params": { 00:14:23.241 "action_on_timeout": "none", 00:14:23.241 "timeout_us": 0, 00:14:23.241 "timeout_admin_us": 0, 00:14:23.241 "keep_alive_timeout_ms": 10000, 00:14:23.241 "arbitration_burst": 0, 00:14:23.241 "low_priority_weight": 0, 00:14:23.241 "medium_priority_weight": 0, 00:14:23.241 "high_priority_weight": 0, 00:14:23.241 "nvme_adminq_poll_period_us": 10000, 00:14:23.241 "nvme_ioq_poll_period_us": 0, 00:14:23.241 "io_queue_requests": 512, 00:14:23.241 "delay_cmd_submit": true, 00:14:23.241 "transport_retry_count": 4, 00:14:23.241 "bdev_retry_count": 3, 00:14:23.241 "transport_ack_timeout": 0, 00:14:23.241 "ctrlr_loss_timeout_sec": 0, 00:14:23.241 "reconnect_delay_sec": 0, 00:14:23.241 "fast_io_fail_timeout_sec": 0, 00:14:23.241 "disable_auto_failback": false, 00:14:23.241 "generate_uuids": false, 00:14:23.241 "transport_tos": 0, 00:14:23.241 "nvme_error_stat": false, 00:14:23.241 "rdma_srq_size": 0, 00:14:23.241 "io_path_stat": false, 00:14:23.241 "allow_accel_sequence": false, 00:14:23.241 "rdma_max_cq_size": 0, 00:14:23.241 "rdma_cm_event_timeout_ms": 0, 00:14:23.241 "dhchap_digests": [ 00:14:23.241 "sha256", 00:14:23.241 "sha384", 00:14:23.241 "sha512" 00:14:23.241 ], 00:14:23.241 "dhchap_dhgroups": [ 00:14:23.241 "null", 00:14:23.241 "ffdhe2048", 00:14:23.241 "ffdhe3072", 00:14:23.241 "ffdhe4096", 00:14:23.241 "ffdhe6144", 00:14:23.241 "ffdhe8192" 00:14:23.241 ] 00:14:23.241 } 00:14:23.241 }, 00:14:23.241 { 00:14:23.241 "method": "bdev_nvme_attach_controller", 00:14:23.241 "params": { 00:14:23.241 "name": "nvme0", 00:14:23.241 "trtype": "TCP", 00:14:23.241 "adrfam": "IPv4", 00:14:23.241 "traddr": "10.0.0.2", 00:14:23.241 "trsvcid": "4420", 00:14:23.241 "subnqn": "nqn.2016-06.io.spdk:cnode1", 
00:14:23.241 "prchk_reftag": false, 00:14:23.241 "prchk_guard": false, 00:14:23.241 "ctrlr_loss_timeout_sec": 0, 00:14:23.241 "reconnect_delay_sec": 0, 00:14:23.241 "fast_io_fail_timeout_sec": 0, 00:14:23.241 "psk": "key0", 00:14:23.241 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:23.241 "hdgst": false, 00:14:23.241 "ddgst": false 00:14:23.241 } 00:14:23.241 }, 00:14:23.241 { 00:14:23.241 "method": "bdev_nvme_set_hotplug", 00:14:23.241 "params": { 00:14:23.241 "period_us": 100000, 00:14:23.241 "enable": false 00:14:23.241 } 00:14:23.241 }, 00:14:23.241 { 00:14:23.241 "method": "bdev_enable_histogram", 00:14:23.241 "params": { 00:14:23.241 "name": "nvme0n1", 00:14:23.241 "enable": true 00:14:23.241 } 00:14:23.241 }, 00:14:23.241 { 00:14:23.241 "method": "bdev_wait_for_examine" 00:14:23.241 } 00:14:23.241 ] 00:14:23.241 }, 00:14:23.241 { 00:14:23.241 "subsystem": "nbd", 00:14:23.241 "config": [] 00:14:23.241 } 00:14:23.241 ] 00:14:23.241 }' 00:14:23.241 07:18:32 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 73902 00:14:23.241 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73902 ']' 00:14:23.241 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73902 00:14:23.241 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:23.241 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:23.241 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73902 00:14:23.241 killing process with pid 73902 00:14:23.241 Received shutdown signal, test time was about 1.000000 seconds 00:14:23.241 00:14:23.241 Latency(us) 00:14:23.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.241 =================================================================================================================== 00:14:23.241 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:23.241 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:23.241 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:23.242 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73902' 00:14:23.242 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73902 00:14:23.242 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73902 00:14:23.501 07:18:32 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 73870 00:14:23.501 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73870 ']' 00:14:23.501 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73870 00:14:23.501 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:23.501 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:23.501 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73870 00:14:23.501 killing process with pid 73870 00:14:23.501 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:23.501 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:23.501 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73870' 00:14:23.501 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73870 00:14:23.501 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73870 
00:14:23.501 07:18:32 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:14:23.501 07:18:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:23.501 07:18:32 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:14:23.501 "subsystems": [ 00:14:23.501 { 00:14:23.501 "subsystem": "keyring", 00:14:23.501 "config": [ 00:14:23.501 { 00:14:23.501 "method": "keyring_file_add_key", 00:14:23.501 "params": { 00:14:23.501 "name": "key0", 00:14:23.501 "path": "/tmp/tmp.I6ECYVYnGY" 00:14:23.501 } 00:14:23.501 } 00:14:23.501 ] 00:14:23.501 }, 00:14:23.501 { 00:14:23.501 "subsystem": "iobuf", 00:14:23.501 "config": [ 00:14:23.501 { 00:14:23.501 "method": "iobuf_set_options", 00:14:23.501 "params": { 00:14:23.501 "small_pool_count": 8192, 00:14:23.501 "large_pool_count": 1024, 00:14:23.501 "small_bufsize": 8192, 00:14:23.501 "large_bufsize": 135168 00:14:23.501 } 00:14:23.501 } 00:14:23.501 ] 00:14:23.501 }, 00:14:23.501 { 00:14:23.501 "subsystem": "sock", 00:14:23.501 "config": [ 00:14:23.501 { 00:14:23.501 "method": "sock_set_default_impl", 00:14:23.501 "params": { 00:14:23.501 "impl_name": "uring" 00:14:23.501 } 00:14:23.501 }, 00:14:23.501 { 00:14:23.501 "method": "sock_impl_set_options", 00:14:23.501 "params": { 00:14:23.501 "impl_name": "ssl", 00:14:23.501 "recv_buf_size": 4096, 00:14:23.501 "send_buf_size": 4096, 00:14:23.501 "enable_recv_pipe": true, 00:14:23.501 "enable_quickack": false, 00:14:23.501 "enable_placement_id": 0, 00:14:23.501 "enable_zerocopy_send_server": true, 00:14:23.501 "enable_zerocopy_send_client": false, 00:14:23.501 "zerocopy_threshold": 0, 00:14:23.501 "tls_version": 0, 00:14:23.501 "enable_ktls": false 00:14:23.501 } 00:14:23.501 }, 00:14:23.501 { 00:14:23.501 "method": "sock_impl_set_options", 00:14:23.501 "params": { 00:14:23.501 "impl_name": "posix", 00:14:23.501 "recv_buf_size": 2097152, 00:14:23.501 "send_buf_size": 2097152, 00:14:23.501 "enable_recv_pipe": true, 00:14:23.501 "enable_quickack": false, 00:14:23.501 "enable_placement_id": 0, 00:14:23.501 "enable_zerocopy_send_server": true, 00:14:23.501 "enable_zerocopy_send_client": false, 00:14:23.501 "zerocopy_threshold": 0, 00:14:23.501 "tls_version": 0, 00:14:23.501 "enable_ktls": false 00:14:23.501 } 00:14:23.501 }, 00:14:23.501 { 00:14:23.501 "method": "sock_impl_set_options", 00:14:23.501 "params": { 00:14:23.501 "impl_name": "uring", 00:14:23.501 "recv_buf_size": 2097152, 00:14:23.501 "send_buf_size": 2097152, 00:14:23.501 "enable_recv_pipe": true, 00:14:23.501 "enable_quickack": false, 00:14:23.501 "enable_placement_id": 0, 00:14:23.501 "enable_zerocopy_send_server": false, 00:14:23.501 "enable_zerocopy_send_client": false, 00:14:23.501 "zerocopy_threshold": 0, 00:14:23.501 "tls_version": 0, 00:14:23.501 "enable_ktls": false 00:14:23.501 } 00:14:23.501 } 00:14:23.501 ] 00:14:23.501 }, 00:14:23.501 { 00:14:23.501 "subsystem": "vmd", 00:14:23.501 "config": [] 00:14:23.501 }, 00:14:23.501 { 00:14:23.501 "subsystem": "accel", 00:14:23.501 "config": [ 00:14:23.501 { 00:14:23.501 "method": "accel_set_options", 00:14:23.501 "params": { 00:14:23.501 "small_cache_size": 128, 00:14:23.501 "large_cache_size": 16, 00:14:23.501 "task_count": 2048, 00:14:23.501 "sequence_count": 2048, 00:14:23.501 "buf_count": 2048 00:14:23.501 } 00:14:23.501 } 00:14:23.501 ] 00:14:23.501 }, 00:14:23.501 { 00:14:23.501 "subsystem": "bdev", 00:14:23.501 "config": [ 00:14:23.501 { 00:14:23.501 "method": "bdev_set_options", 00:14:23.501 "params": { 00:14:23.501 "bdev_io_pool_size": 65535, 
00:14:23.501 "bdev_io_cache_size": 256, 00:14:23.501 "bdev_auto_examine": true, 00:14:23.501 "iobuf_small_cache_size": 128, 00:14:23.501 "iobuf_large_cache_size": 16 00:14:23.501 } 00:14:23.501 }, 00:14:23.501 { 00:14:23.501 "method": "bdev_raid_set_options", 00:14:23.501 "params": { 00:14:23.501 "process_window_size_kb": 1024 00:14:23.501 } 00:14:23.501 }, 00:14:23.501 { 00:14:23.501 "method": "bdev_iscsi_set_options", 00:14:23.501 "params": { 00:14:23.501 "timeout_sec": 30 00:14:23.501 } 00:14:23.501 }, 00:14:23.501 { 00:14:23.501 "method": "bdev_nvme_set_options", 00:14:23.501 "params": { 00:14:23.501 "action_on_timeout": "none", 00:14:23.501 "timeout_us": 0, 00:14:23.501 "timeout_admin_us": 0, 00:14:23.501 "keep_alive_timeout_ms": 10000, 00:14:23.501 "arbitration_burst": 0, 00:14:23.501 "low_priority_weight": 0, 00:14:23.501 "medium_priority_weight": 0, 00:14:23.501 "high_priority_weight": 0, 00:14:23.501 "nvme_adminq_poll_period_us": 10000, 00:14:23.501 "nvme_ioq_poll_period_us": 0, 00:14:23.501 "io_queue_requests": 0, 00:14:23.501 "delay_cmd_submit": true, 00:14:23.501 "transport_retry_count": 4, 00:14:23.501 "bdev_retry_count": 3, 00:14:23.501 "transport_ack_timeout": 0, 00:14:23.501 "ctrlr_loss_timeout_sec": 0, 00:14:23.501 "reconnect_delay_sec": 0, 00:14:23.501 "fast_io_fail_timeout_sec": 0, 00:14:23.501 "disable_auto_failback": false, 00:14:23.501 "generate_uuids": false, 00:14:23.501 "transport_tos": 0, 00:14:23.501 "nvme_error_stat": false, 00:14:23.501 "rdma_srq_size": 0, 00:14:23.501 "io_path_stat": false, 00:14:23.501 "allow_accel_sequence": false, 00:14:23.501 "rdma_max_cq_size": 0, 00:14:23.501 "rdma_cm_event_timeout_ms": 0, 00:14:23.501 "dhchap_digests": [ 00:14:23.501 "sha256", 00:14:23.501 "sha384", 00:14:23.501 "sha512" 00:14:23.501 ], 00:14:23.501 "dhchap_dhgroups": [ 00:14:23.501 "null", 00:14:23.501 "ffdhe2048", 00:14:23.501 "ffdhe3072", 00:14:23.501 "ffdhe4096", 00:14:23.501 "ffdhe6144", 00:14:23.501 "ffdhe8192" 00:14:23.501 ] 00:14:23.501 } 00:14:23.501 }, 00:14:23.501 { 00:14:23.501 "method": "bdev_nvme_set_hotplug", 00:14:23.501 "params": { 00:14:23.501 "period_us": 100000, 00:14:23.501 "enable": false 00:14:23.501 } 00:14:23.501 }, 00:14:23.501 { 00:14:23.501 "method": "bdev_malloc_create", 00:14:23.501 "params": { 00:14:23.501 "name": "malloc0", 00:14:23.501 "num_blocks": 8192, 00:14:23.501 "block_size": 4096, 00:14:23.501 "physical_block_size": 4096, 00:14:23.501 "uuid": "5b4eed3a-614b-4a51-9aad-1019f418b407", 00:14:23.501 "optimal_io_boundary": 0 00:14:23.501 } 00:14:23.501 }, 00:14:23.501 { 00:14:23.501 "method": "bdev_wait_for_examine" 00:14:23.501 } 00:14:23.501 ] 00:14:23.501 }, 00:14:23.501 { 00:14:23.501 "subsystem": "nbd", 00:14:23.501 "config": [] 00:14:23.501 }, 00:14:23.501 { 00:14:23.501 "subsystem": "scheduler", 00:14:23.501 "config": [ 00:14:23.501 { 00:14:23.501 "method": "framework_set_scheduler", 00:14:23.501 "params": { 00:14:23.501 "name": "static" 00:14:23.501 } 00:14:23.501 } 00:14:23.501 ] 00:14:23.501 }, 00:14:23.501 { 00:14:23.501 "subsystem": "nvmf", 00:14:23.501 "config": [ 00:14:23.501 { 00:14:23.501 "method": "nvmf_set_config", 00:14:23.501 "params": { 00:14:23.501 "discovery_filter": "match_any", 00:14:23.501 "admin_cmd_passthru": { 00:14:23.501 "identify_ctrlr": false 00:14:23.501 } 00:14:23.501 } 00:14:23.501 }, 00:14:23.501 { 00:14:23.501 "method": "nvmf_set_max_subsystems", 00:14:23.501 "params": { 00:14:23.501 "max_subsystems": 1024 00:14:23.501 } 00:14:23.501 }, 00:14:23.501 { 00:14:23.501 "method": "nvmf_set_crdt", 
00:14:23.501 "params": { 00:14:23.501 "crdt1": 0, 00:14:23.501 "crdt2": 0, 00:14:23.501 "crdt3": 0 00:14:23.501 } 00:14:23.501 }, 00:14:23.501 { 00:14:23.501 "method": "nvmf_create_transport", 00:14:23.501 "params": { 00:14:23.501 "trtype": "TCP", 00:14:23.501 "max_queue_depth": 128, 00:14:23.501 "max_io_qpairs_per_ctrlr": 127, 00:14:23.501 "in_capsule_data_size": 4096, 00:14:23.501 "max_io_size": 131072, 00:14:23.501 "io_unit_size": 131072, 00:14:23.501 "max_aq_depth": 128, 00:14:23.501 "num_shared_buffers": 511, 00:14:23.501 "buf_cache_size": 4294967295, 00:14:23.502 "dif_insert_or_strip": false, 00:14:23.502 "zcopy": false, 00:14:23.502 "c2h_success": false, 00:14:23.502 "sock_priority": 0, 00:14:23.502 "abort_timeout_sec": 1, 00:14:23.502 "ack_timeout": 0, 00:14:23.502 "data_wr_pool_size": 0 00:14:23.502 } 00:14:23.502 }, 00:14:23.502 { 00:14:23.502 "method": "nvmf_create_subsystem", 00:14:23.502 "params": { 00:14:23.502 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:23.502 "allow_any_host": false, 00:14:23.502 "serial_number": "00000000000000000000", 00:14:23.502 "model_number": "SPDK bdev Controller", 00:14:23.502 "max_namespaces": 32, 00:14:23.502 "min_cntlid": 1, 00:14:23.502 "max_cntlid": 65519, 00:14:23.502 "ana_reporting": false 00:14:23.502 } 00:14:23.502 }, 00:14:23.502 { 00:14:23.502 "method": "nvmf_subsystem_add_host", 00:14:23.502 "params": { 00:14:23.502 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:23.502 "host": "nqn.2016-06.io.spdk:host1", 00:14:23.502 "psk": "key0" 00:14:23.502 } 00:14:23.502 }, 00:14:23.502 { 00:14:23.502 "method": "nvmf_subsystem_add_ns", 00:14:23.502 "params": { 00:14:23.502 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:23.502 "namespace": { 00:14:23.502 "nsid": 1, 00:14:23.502 "bdev_name": "malloc0", 00:14:23.502 "nguid": "5B4EED3A614B4A519AAD1019F418B407", 00:14:23.502 "uuid": "5b4eed3a-614b-4a51-9aad-1019f418b407", 00:14:23.502 "no_auto_visible": false 00:14:23.502 } 00:14:23.502 } 00:14:23.502 }, 00:14:23.502 { 00:14:23.502 "method": "nvmf_subsystem_add_listener", 00:14:23.502 "params": { 00:14:23.502 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:23.502 "listen_address": { 00:14:23.502 "trtype": "TCP", 00:14:23.502 "adrfam": "IPv4", 00:14:23.502 "traddr": "10.0.0.2", 00:14:23.502 "trsvcid": "4420" 00:14:23.502 }, 00:14:23.502 "secure_channel": true 00:14:23.502 } 00:14:23.502 } 00:14:23.502 ] 00:14:23.502 } 00:14:23.502 ] 00:14:23.502 }' 00:14:23.502 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:23.502 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.502 07:18:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:23.502 07:18:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73964 00:14:23.502 07:18:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73964 00:14:23.502 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73964 ']' 00:14:23.502 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.502 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:23.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.502 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:23.502 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:23.502 07:18:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.760 [2024-07-15 07:18:32.464177] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:14:23.760 [2024-07-15 07:18:32.464273] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.760 [2024-07-15 07:18:32.605786] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.760 [2024-07-15 07:18:32.675001] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.760 [2024-07-15 07:18:32.675063] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.760 [2024-07-15 07:18:32.675097] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.760 [2024-07-15 07:18:32.675109] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.760 [2024-07-15 07:18:32.675118] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.760 [2024-07-15 07:18:32.675216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.018 [2024-07-15 07:18:32.821542] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:24.018 [2024-07-15 07:18:32.881334] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:24.018 [2024-07-15 07:18:32.913266] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:24.018 [2024-07-15 07:18:32.913498] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:24.584 07:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:24.584 07:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:24.584 07:18:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:24.584 07:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:24.584 07:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:24.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
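With the target (pid 73964) now listening on 10.0.0.2:4420 inside the nvmf_tgt_ns_spdk namespace, the start-and-wait sequence traced above amounts to roughly the following. The rpc_get_methods polling loop is only an approximation of what waitforlisten does (max_retries=100 comes from the trace, the 0.5 s interval is assumed), and $target_json stands for the JSON blob echoed above.

# Approximate reduction of nvmfappstart + waitforlisten as traced above.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF \
    -c <(echo "$target_json") &
nvmfpid=$!
# Poll the RPC socket until the app answers (the trace shows max_retries=100).
for ((i = 0; i < 100; i++)); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5   # assumed interval; the helper's exact pacing is not shown above
done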
00:14:24.584 07:18:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.584 07:18:33 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=73996 00:14:24.584 07:18:33 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 73996 /var/tmp/bdevperf.sock 00:14:24.584 07:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73996 ']' 00:14:24.584 07:18:33 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:24.584 07:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:24.584 07:18:33 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:14:24.584 "subsystems": [ 00:14:24.584 { 00:14:24.584 "subsystem": "keyring", 00:14:24.584 "config": [ 00:14:24.584 { 00:14:24.584 "method": "keyring_file_add_key", 00:14:24.584 "params": { 00:14:24.584 "name": "key0", 00:14:24.584 "path": "/tmp/tmp.I6ECYVYnGY" 00:14:24.584 } 00:14:24.584 } 00:14:24.584 ] 00:14:24.584 }, 00:14:24.584 { 00:14:24.584 "subsystem": "iobuf", 00:14:24.584 "config": [ 00:14:24.584 { 00:14:24.584 "method": "iobuf_set_options", 00:14:24.584 "params": { 00:14:24.584 "small_pool_count": 8192, 00:14:24.584 "large_pool_count": 1024, 00:14:24.584 "small_bufsize": 8192, 00:14:24.584 "large_bufsize": 135168 00:14:24.584 } 00:14:24.584 } 00:14:24.584 ] 00:14:24.584 }, 00:14:24.584 { 00:14:24.585 "subsystem": "sock", 00:14:24.585 "config": [ 00:14:24.585 { 00:14:24.585 "method": "sock_set_default_impl", 00:14:24.585 "params": { 00:14:24.585 "impl_name": "uring" 00:14:24.585 } 00:14:24.585 }, 00:14:24.585 { 00:14:24.585 "method": "sock_impl_set_options", 00:14:24.585 "params": { 00:14:24.585 "impl_name": "ssl", 00:14:24.585 "recv_buf_size": 4096, 00:14:24.585 "send_buf_size": 4096, 00:14:24.585 "enable_recv_pipe": true, 00:14:24.585 "enable_quickack": false, 00:14:24.585 "enable_placement_id": 0, 00:14:24.585 "enable_zerocopy_send_server": true, 00:14:24.585 "enable_zerocopy_send_client": false, 00:14:24.585 "zerocopy_threshold": 0, 00:14:24.585 "tls_version": 0, 00:14:24.585 "enable_ktls": false 00:14:24.585 } 00:14:24.585 }, 00:14:24.585 { 00:14:24.585 "method": "sock_impl_set_options", 00:14:24.585 "params": { 00:14:24.585 "impl_name": "posix", 00:14:24.585 "recv_buf_size": 2097152, 00:14:24.585 "send_buf_size": 2097152, 00:14:24.585 "enable_recv_pipe": true, 00:14:24.585 "enable_quickack": false, 00:14:24.585 "enable_placement_id": 0, 00:14:24.585 "enable_zerocopy_send_server": true, 00:14:24.585 "enable_zerocopy_send_client": false, 00:14:24.585 "zerocopy_threshold": 0, 00:14:24.585 "tls_version": 0, 00:14:24.585 "enable_ktls": false 00:14:24.585 } 00:14:24.585 }, 00:14:24.585 { 00:14:24.585 "method": "sock_impl_set_options", 00:14:24.585 "params": { 00:14:24.585 "impl_name": "uring", 00:14:24.585 "recv_buf_size": 2097152, 00:14:24.585 "send_buf_size": 2097152, 00:14:24.585 "enable_recv_pipe": true, 00:14:24.585 "enable_quickack": false, 00:14:24.585 "enable_placement_id": 0, 00:14:24.585 "enable_zerocopy_send_server": false, 00:14:24.585 "enable_zerocopy_send_client": false, 00:14:24.585 "zerocopy_threshold": 0, 00:14:24.585 "tls_version": 0, 00:14:24.585 "enable_ktls": false 00:14:24.585 } 00:14:24.585 } 00:14:24.585 ] 00:14:24.585 }, 00:14:24.585 { 00:14:24.585 "subsystem": "vmd", 00:14:24.585 "config": [] 00:14:24.585 }, 00:14:24.585 { 00:14:24.585 
"subsystem": "accel", 00:14:24.585 "config": [ 00:14:24.585 { 00:14:24.585 "method": "accel_set_options", 00:14:24.585 "params": { 00:14:24.585 "small_cache_size": 128, 00:14:24.585 "large_cache_size": 16, 00:14:24.585 "task_count": 2048, 00:14:24.585 "sequence_count": 2048, 00:14:24.585 "buf_count": 2048 00:14:24.585 } 00:14:24.585 } 00:14:24.585 ] 00:14:24.585 }, 00:14:24.585 { 00:14:24.585 "subsystem": "bdev", 00:14:24.585 "config": [ 00:14:24.585 { 00:14:24.585 "method": "bdev_set_options", 00:14:24.585 "params": { 00:14:24.585 "bdev_io_pool_size": 65535, 00:14:24.585 "bdev_io_cache_size": 256, 00:14:24.585 "bdev_auto_examine": true, 00:14:24.585 "iobuf_small_cache_size": 128, 00:14:24.585 "iobuf_large_cache_size": 16 00:14:24.585 } 00:14:24.585 }, 00:14:24.585 { 00:14:24.585 "method": "bdev_raid_set_options", 00:14:24.585 "params": { 00:14:24.585 "process_window_size_kb": 1024 00:14:24.585 } 00:14:24.585 }, 00:14:24.585 { 00:14:24.585 "method": "bdev_iscsi_set_options", 00:14:24.585 "params": { 00:14:24.585 "timeout_sec": 30 00:14:24.585 } 00:14:24.585 }, 00:14:24.585 { 00:14:24.585 "method": "bdev_nvme_set_options", 00:14:24.585 "params": { 00:14:24.585 "action_on_timeout": "none", 00:14:24.585 "timeout_us": 0, 00:14:24.585 "timeout_admin_us": 0, 00:14:24.585 "keep_alive_timeout_ms": 10000, 00:14:24.585 "arbitration_burst": 0, 00:14:24.585 "low_priority_weight": 0, 00:14:24.585 "medium_priority_weight": 0, 00:14:24.585 "high_priority_weight": 0, 00:14:24.585 "nvme_adminq_poll_period_us": 10000, 00:14:24.585 "nvme_ioq_poll_period_us": 0, 00:14:24.585 "io_queue_requests": 512, 00:14:24.585 "delay_cmd_submit": true, 00:14:24.585 "transport_retry_count": 4, 00:14:24.585 "bdev_retry_count": 3, 00:14:24.585 "transport_ack_timeout": 0, 00:14:24.585 "ctrlr_loss_timeout_sec": 0, 00:14:24.585 "reconnect_delay_sec": 0, 00:14:24.585 "fast_io_fail_timeout_sec": 0, 00:14:24.585 "disable_auto_failback": false, 00:14:24.585 "generate_uuids": false, 00:14:24.585 "transport_tos": 0, 00:14:24.585 "nvme_error_stat": false, 00:14:24.585 "rdma_srq_size": 0, 00:14:24.585 "io_path_stat": false, 00:14:24.585 "allow_accel_sequence": false, 00:14:24.585 "rdma_max_cq_size": 0, 00:14:24.585 "rdma_cm_event_timeout_ms": 0, 00:14:24.585 "dhchap_digests": [ 00:14:24.585 "sha256", 00:14:24.585 "sha384", 00:14:24.585 "sha512" 00:14:24.585 ], 00:14:24.585 "dhchap_dhgroups": [ 00:14:24.585 "null", 00:14:24.585 "ffdhe2048", 00:14:24.585 "ffdhe3072", 00:14:24.585 "ffdhe4096", 00:14:24.585 "ffdhe6144", 00:14:24.585 "ffdhe8192" 00:14:24.585 ] 00:14:24.585 } 00:14:24.585 }, 00:14:24.585 { 00:14:24.585 "method": "bdev_nvme_attach_controller", 00:14:24.585 "params": { 00:14:24.585 "name": "nvme0", 00:14:24.585 "trtype": "TCP", 00:14:24.585 "adrfam": "IPv4", 00:14:24.585 "traddr": "10.0.0.2", 00:14:24.585 "trsvcid": "4420", 00:14:24.585 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:24.585 "prchk_reftag": false, 00:14:24.585 "prchk_guard": false, 00:14:24.585 "ctrlr_loss_timeout_sec": 0, 00:14:24.585 "reconnect_delay_sec": 0, 00:14:24.585 "fast_io_fail_timeout_sec": 0, 00:14:24.585 "psk": "key0", 00:14:24.585 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:24.585 "hdgst": false, 00:14:24.585 "ddgst": false 00:14:24.585 } 00:14:24.585 }, 00:14:24.585 { 00:14:24.585 "method": "bdev_nvme_set_hotplug", 00:14:24.585 "params": { 00:14:24.585 "period_us": 100000, 00:14:24.585 "enable": false 00:14:24.585 } 00:14:24.585 }, 00:14:24.585 { 00:14:24.585 "method": "bdev_enable_histogram", 00:14:24.585 "params": { 00:14:24.585 "name": 
"nvme0n1", 00:14:24.585 "enable": true 00:14:24.585 } 00:14:24.585 }, 00:14:24.585 { 00:14:24.585 "method": "bdev_wait_for_examine" 00:14:24.585 } 00:14:24.585 ] 00:14:24.585 }, 00:14:24.585 { 00:14:24.585 "subsystem": "nbd", 00:14:24.585 "config": [] 00:14:24.585 } 00:14:24.585 ] 00:14:24.585 }' 00:14:24.585 07:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:24.585 07:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:24.585 07:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:24.585 07:18:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:24.842 [2024-07-15 07:18:33.558980] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:14:24.842 [2024-07-15 07:18:33.559059] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73996 ] 00:14:24.842 [2024-07-15 07:18:33.693128] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.842 [2024-07-15 07:18:33.764088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.101 [2024-07-15 07:18:33.878605] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:25.101 [2024-07-15 07:18:33.912044] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:25.665 07:18:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:25.665 07:18:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:25.665 07:18:34 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:25.665 07:18:34 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:14:26.231 07:18:34 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.232 07:18:34 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:26.232 Running I/O for 1 seconds... 
00:14:27.165 00:14:27.165 Latency(us) 00:14:27.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.165 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:27.165 Verification LBA range: start 0x0 length 0x2000 00:14:27.165 nvme0n1 : 1.02 3711.51 14.50 0.00 0.00 34100.54 7298.33 33602.09 00:14:27.165 =================================================================================================================== 00:14:27.165 Total : 3711.51 14.50 0.00 0.00 34100.54 7298.33 33602.09 00:14:27.165 0 00:14:27.165 07:18:36 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:14:27.165 07:18:36 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:14:27.165 07:18:36 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:27.165 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:14:27.165 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:14:27.165 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:27.165 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:27.165 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:27.165 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:27.165 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:27.165 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:27.165 nvmf_trace.0 00:14:27.423 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:14:27.423 07:18:36 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 73996 00:14:27.423 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73996 ']' 00:14:27.423 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73996 00:14:27.423 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:27.423 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:27.423 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73996 00:14:27.423 killing process with pid 73996 00:14:27.423 Received shutdown signal, test time was about 1.000000 seconds 00:14:27.423 00:14:27.423 Latency(us) 00:14:27.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.423 =================================================================================================================== 00:14:27.423 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:27.423 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:27.423 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:27.423 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73996' 00:14:27.423 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73996 00:14:27.423 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73996 00:14:27.423 07:18:36 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:27.423 07:18:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:27.423 07:18:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:14:27.423 07:18:36 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:27.423 07:18:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:14:27.423 07:18:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:27.423 07:18:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:27.423 rmmod nvme_tcp 00:14:27.682 rmmod nvme_fabrics 00:14:27.682 rmmod nvme_keyring 00:14:27.682 07:18:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:27.682 07:18:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:14:27.682 07:18:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:14:27.682 07:18:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 73964 ']' 00:14:27.682 07:18:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 73964 00:14:27.682 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73964 ']' 00:14:27.682 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73964 00:14:27.682 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:27.682 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:27.682 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73964 00:14:27.682 killing process with pid 73964 00:14:27.682 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:27.682 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:27.682 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73964' 00:14:27.682 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73964 00:14:27.682 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73964 00:14:27.682 07:18:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:27.682 07:18:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:27.682 07:18:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:27.682 07:18:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:27.682 07:18:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:27.682 07:18:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.682 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.682 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.940 07:18:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:27.940 07:18:36 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.X2l3ElTpXW /tmp/tmp.H7v7mznCAw /tmp/tmp.I6ECYVYnGY 00:14:27.940 ************************************ 00:14:27.940 END TEST nvmf_tls 00:14:27.940 ************************************ 00:14:27.940 00:14:27.940 real 1m24.455s 00:14:27.940 user 2m15.537s 00:14:27.940 sys 0m26.471s 00:14:27.940 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:27.940 07:18:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:27.940 07:18:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:27.940 07:18:36 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:27.940 07:18:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:27.940 07:18:36 nvmf_tcp 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:27.940 07:18:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:27.940 ************************************ 00:14:27.940 START TEST nvmf_fips 00:14:27.940 ************************************ 00:14:27.940 07:18:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:27.940 * Looking for test storage... 00:14:27.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@333 -- # read -ra ver1 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:14:27.941 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:27.942 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:27.942 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:14:27.942 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:14:27.942 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:14:27.942 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:14:27.942 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:14:27.942 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:14:27.942 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:27.942 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:14:27.942 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:14:27.942 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:14:27.942 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:14:28.200 Error setting digest 00:14:28.200 0022BB57DA7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:14:28.200 0022BB57DA7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:28.200 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:28.201 07:18:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:28.201 Cannot find device "nvmf_tgt_br" 00:14:28.201 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:14:28.201 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:28.201 Cannot find device "nvmf_tgt_br2" 00:14:28.201 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:14:28.201 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:28.201 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:28.201 Cannot find device "nvmf_tgt_br" 00:14:28.201 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:14:28.201 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:28.201 Cannot find device "nvmf_tgt_br2" 00:14:28.201 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:14:28.201 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:28.201 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:28.201 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:28.201 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:28.201 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:28.201 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:28.201 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:28.201 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:28.201 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:28.201 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:28.201 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:28.201 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:28.201 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:28.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:28.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:14:28.459 00:14:28.459 --- 10.0.0.2 ping statistics --- 00:14:28.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.459 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:28.459 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:28.459 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:14:28.459 00:14:28.459 --- 10.0.0.3 ping statistics --- 00:14:28.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.459 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:28.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:28.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:14:28.459 00:14:28.459 --- 10.0.0.1 ping statistics --- 00:14:28.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.459 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=74261 00:14:28.459 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 74261 00:14:28.460 07:18:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:28.460 07:18:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74261 ']' 00:14:28.460 07:18:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.460 07:18:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:28.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.460 07:18:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.460 07:18:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:28.460 07:18:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:28.718 [2024-07-15 07:18:37.435019] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:14:28.718 [2024-07-15 07:18:37.435127] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.718 [2024-07-15 07:18:37.575788] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.718 [2024-07-15 07:18:37.647346] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.718 [2024-07-15 07:18:37.647588] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.718 [2024-07-15 07:18:37.647687] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:28.718 [2024-07-15 07:18:37.647781] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:28.718 [2024-07-15 07:18:37.647862] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:28.718 [2024-07-15 07:18:37.648017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.977 [2024-07-15 07:18:37.681833] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:29.544 07:18:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:29.544 07:18:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:14:29.544 07:18:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:29.544 07:18:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:29.544 07:18:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:29.544 07:18:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.544 07:18:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:14:29.544 07:18:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:29.544 07:18:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:29.544 07:18:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:29.544 07:18:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:29.544 07:18:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:29.544 07:18:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:29.544 07:18:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:29.803 [2024-07-15 07:18:38.714515] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.803 [2024-07-15 07:18:38.730444] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:29.803 [2024-07-15 07:18:38.730626] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.061 [2024-07-15 07:18:38.757619] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:30.061 malloc0 00:14:30.061 07:18:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
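Before bdevperf is launched below, the interleaved PSK has already been written to key.txt, restricted to mode 0600, and handed to the target (the tcp.c:3679 nvmf_tcp_psk_path deprecation warning above corresponds to that step). The client half that follows (fips.sh@150) boils down to a single RPC against bdevperf's private socket; as a sketch, using the same key and addresses as this run:

key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
chmod 0600 "$key_path"    # the script restricts the key file before handing it to the RPC
# Attach a TLS-protected controller through bdevperf's private RPC socket;
# this is the call that produces the TLSTESTn1 bdev exercised for 10 seconds below.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk "$key_path"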
00:14:30.061 07:18:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=74295 00:14:30.062 07:18:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:30.062 07:18:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 74295 /var/tmp/bdevperf.sock 00:14:30.062 07:18:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74295 ']' 00:14:30.062 07:18:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:30.062 07:18:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:30.062 07:18:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:30.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:30.062 07:18:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:30.062 07:18:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:30.062 [2024-07-15 07:18:38.866122] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:14:30.062 [2024-07-15 07:18:38.866222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74295 ] 00:14:30.062 [2024-07-15 07:18:39.007119] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.320 [2024-07-15 07:18:39.076494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.320 [2024-07-15 07:18:39.109166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:30.938 07:18:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:30.938 07:18:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:14:30.938 07:18:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:31.195 [2024-07-15 07:18:40.073270] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:31.195 [2024-07-15 07:18:40.073397] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:31.195 TLSTESTn1 00:14:31.453 07:18:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:31.453 Running I/O for 10 seconds... 
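Annotation: the initiator side mirrors this setup: bdevperf starts idle (-z) on its own RPC socket, a TLS-enabled controller is attached through that socket, and bdevperf.py then drives the 10-second verify workload. Condensed from the commands in the trace above (repository paths shortened):

    # condensed from the trace: attach a TLS controller and run the workload via bdevperf's RPC socket
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests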
00:14:41.447 00:14:41.448 Latency(us) 00:14:41.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.448 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:41.448 Verification LBA range: start 0x0 length 0x2000 00:14:41.448 TLSTESTn1 : 10.02 3817.90 14.91 0.00 0.00 33460.69 6821.70 30384.87 00:14:41.448 =================================================================================================================== 00:14:41.448 Total : 3817.90 14.91 0.00 0.00 33460.69 6821.70 30384.87 00:14:41.448 0 00:14:41.448 07:18:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:41.448 07:18:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:41.448 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:14:41.448 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:14:41.448 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:41.448 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:41.448 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:41.448 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:41.448 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:41.448 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:41.448 nvmf_trace.0 00:14:41.448 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:14:41.448 07:18:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 74295 00:14:41.448 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74295 ']' 00:14:41.448 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74295 00:14:41.448 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:14:41.448 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:41.448 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74295 00:14:41.705 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:41.705 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:41.705 killing process with pid 74295 00:14:41.705 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74295' 00:14:41.705 Received shutdown signal, test time was about 10.000000 seconds 00:14:41.705 00:14:41.705 Latency(us) 00:14:41.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.705 =================================================================================================================== 00:14:41.705 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:41.705 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74295 00:14:41.705 [2024-07-15 07:18:50.420457] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:41.705 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74295 00:14:41.705 07:18:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:41.705 07:18:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
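Annotation: the throughput column in the TLSTESTn1 table above is consistent with the IOPS figure: at the 4096-byte I/O size used by this run, 3817.90 IOPS is roughly 14.91 MiB/s, e.g.:

    # 3817.90 IOPS x 4096 B per I/O, expressed in MiB/s
    echo '3817.90 * 4096 / 1048576' | bc -l    # ~= 14.91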
00:14:41.705 07:18:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:14:41.705 07:18:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:41.705 07:18:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:14:41.705 07:18:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:41.705 07:18:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:41.705 rmmod nvme_tcp 00:14:41.705 rmmod nvme_fabrics 00:14:41.705 rmmod nvme_keyring 00:14:41.705 07:18:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:41.705 07:18:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:14:41.705 07:18:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:14:41.705 07:18:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 74261 ']' 00:14:41.705 07:18:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 74261 00:14:41.705 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74261 ']' 00:14:41.705 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74261 00:14:41.705 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:14:41.705 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:41.705 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74261 00:14:41.963 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:41.963 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:41.963 killing process with pid 74261 00:14:41.963 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74261' 00:14:41.963 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74261 00:14:41.963 [2024-07-15 07:18:50.679784] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:41.963 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74261 00:14:41.963 07:18:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:41.963 07:18:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:41.963 07:18:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:41.963 07:18:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:41.964 07:18:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:41.964 07:18:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.964 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.964 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.964 07:18:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:41.964 07:18:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:41.964 00:14:41.964 real 0m14.176s 00:14:41.964 user 0m19.707s 00:14:41.964 sys 0m5.452s 00:14:41.964 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:41.964 07:18:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:41.964 ************************************ 00:14:41.964 END TEST nvmf_fips 00:14:41.964 ************************************ 00:14:42.222 07:18:50 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:42.222 07:18:50 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:14:42.222 07:18:50 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:14:42.222 07:18:50 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:14:42.222 07:18:50 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:42.222 07:18:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:42.222 07:18:50 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:14:42.222 07:18:50 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:42.222 07:18:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:42.222 07:18:50 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:14:42.222 07:18:50 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:42.222 07:18:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:42.222 07:18:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:42.222 07:18:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:42.222 ************************************ 00:14:42.222 START TEST nvmf_identify 00:14:42.222 ************************************ 00:14:42.222 07:18:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:42.222 * Looking for test storage... 00:14:42.222 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:42.222 07:18:51 
nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:42.222 07:18:51 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:42.222 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:42.223 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:42.223 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:42.223 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:42.223 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:42.223 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:42.223 Cannot find device "nvmf_tgt_br" 00:14:42.223 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:14:42.223 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:42.223 Cannot find device "nvmf_tgt_br2" 00:14:42.223 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:14:42.223 07:18:51 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:42.223 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:42.223 Cannot find device "nvmf_tgt_br" 00:14:42.223 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:14:42.223 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:42.223 Cannot find device "nvmf_tgt_br2" 00:14:42.223 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:14:42.223 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:42.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:42.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:42.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:42.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:14:42.481 00:14:42.481 --- 10.0.0.2 ping statistics --- 00:14:42.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.481 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:42.481 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:42.481 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:14:42.481 00:14:42.481 --- 10.0.0.3 ping statistics --- 00:14:42.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.481 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:42.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:42.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:42.481 00:14:42.481 --- 10.0.0.1 ping statistics --- 00:14:42.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.481 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74639 00:14:42.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
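Annotation: for readability, the nvmf_veth_init steps scattered through the trace above reduce to a simple topology: the initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace, the target addresses 10.0.0.2 and 10.0.0.3 sit on veth peers moved into nvmf_tgt_ns_spdk, and all bridge-side peers are enslaved to nvmf_br. Condensed from the trace (link-up steps and error handling omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT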
00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74639 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 74639 ']' 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:42.481 07:18:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:42.739 [2024-07-15 07:18:51.459106] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:14:42.739 [2024-07-15 07:18:51.459464] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.739 [2024-07-15 07:18:51.599053] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:42.739 [2024-07-15 07:18:51.671161] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.739 [2024-07-15 07:18:51.671429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:42.739 [2024-07-15 07:18:51.671656] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:42.739 [2024-07-15 07:18:51.671802] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:42.739 [2024-07-15 07:18:51.672021] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:42.739 [2024-07-15 07:18:51.672132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.739 [2024-07-15 07:18:51.672192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.739 [2024-07-15 07:18:51.673036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:42.739 [2024-07-15 07:18:51.673090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.996 [2024-07-15 07:18:51.705382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:43.562 [2024-07-15 07:18:52.423002] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:43.562 Malloc0 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:43.562 [2024-07-15 07:18:52.508946] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.562 07:18:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:43.825 07:18:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.825 07:18:52 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:43.825 07:18:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.825 07:18:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:43.825 [ 00:14:43.825 { 00:14:43.825 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:43.825 "subtype": "Discovery", 00:14:43.825 "listen_addresses": [ 00:14:43.825 { 00:14:43.825 "trtype": "TCP", 00:14:43.825 "adrfam": "IPv4", 00:14:43.825 "traddr": "10.0.0.2", 00:14:43.825 "trsvcid": "4420" 00:14:43.825 } 00:14:43.825 ], 00:14:43.825 "allow_any_host": true, 00:14:43.825 "hosts": [] 00:14:43.825 }, 00:14:43.825 { 00:14:43.825 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.825 "subtype": "NVMe", 00:14:43.825 "listen_addresses": [ 00:14:43.825 { 00:14:43.825 "trtype": "TCP", 00:14:43.825 "adrfam": "IPv4", 00:14:43.825 "traddr": "10.0.0.2", 00:14:43.825 "trsvcid": "4420" 00:14:43.825 } 00:14:43.825 ], 00:14:43.825 "allow_any_host": true, 00:14:43.825 "hosts": [], 00:14:43.825 "serial_number": "SPDK00000000000001", 00:14:43.825 "model_number": "SPDK bdev Controller", 00:14:43.825 "max_namespaces": 32, 00:14:43.825 "min_cntlid": 1, 00:14:43.825 "max_cntlid": 65519, 00:14:43.825 "namespaces": [ 00:14:43.825 { 00:14:43.825 "nsid": 1, 00:14:43.825 "bdev_name": "Malloc0", 00:14:43.825 "name": "Malloc0", 00:14:43.825 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:43.825 "eui64": "ABCDEF0123456789", 00:14:43.825 "uuid": "e2bd08a5-7985-48ea-b116-67e6ac269337" 00:14:43.825 } 00:14:43.825 ] 00:14:43.825 } 00:14:43.825 ] 00:14:43.825 07:18:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.825 07:18:52 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:43.825 [2024-07-15 07:18:52.560424] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
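Annotation: identify.sh drives the whole target configuration through rpc_cmd, which effectively forwards each call to scripts/rpc.py on the default /var/tmp/spdk.sock socket. Written out as plain rpc.py invocations, the sequence visible in the trace above is:

    # target-side setup, as issued by identify.sh via rpc_cmd
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # then query the discovery subsystem with the identify example, as in the trace
    build/bin/spdk_nvme_identify -L all \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'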
00:14:43.825 [2024-07-15 07:18:52.560650] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74680 ] 00:14:43.825 [2024-07-15 07:18:52.704865] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:14:43.825 [2024-07-15 07:18:52.704937] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:43.825 [2024-07-15 07:18:52.704945] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:43.825 [2024-07-15 07:18:52.704958] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:43.825 [2024-07-15 07:18:52.704966] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:43.825 [2024-07-15 07:18:52.705151] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:14:43.825 [2024-07-15 07:18:52.705208] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc9c2c0 0 00:14:43.825 [2024-07-15 07:18:52.720097] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:43.825 [2024-07-15 07:18:52.720131] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:43.825 [2024-07-15 07:18:52.720139] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:43.825 [2024-07-15 07:18:52.720143] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:43.825 [2024-07-15 07:18:52.720190] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.825 [2024-07-15 07:18:52.720198] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.825 [2024-07-15 07:18:52.720203] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9c2c0) 00:14:43.825 [2024-07-15 07:18:52.720218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:43.825 [2024-07-15 07:18:52.720253] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdd940, cid 0, qid 0 00:14:43.825 [2024-07-15 07:18:52.728101] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.826 [2024-07-15 07:18:52.728132] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.826 [2024-07-15 07:18:52.728139] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.728145] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdd940) on tqpair=0xc9c2c0 00:14:43.826 [2024-07-15 07:18:52.728160] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:43.826 [2024-07-15 07:18:52.728169] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:14:43.826 [2024-07-15 07:18:52.728175] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:14:43.826 [2024-07-15 07:18:52.728195] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.728201] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.826 
[2024-07-15 07:18:52.728206] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9c2c0) 00:14:43.826 [2024-07-15 07:18:52.728217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.826 [2024-07-15 07:18:52.728247] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdd940, cid 0, qid 0 00:14:43.826 [2024-07-15 07:18:52.728356] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.826 [2024-07-15 07:18:52.728363] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.826 [2024-07-15 07:18:52.728367] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.728372] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdd940) on tqpair=0xc9c2c0 00:14:43.826 [2024-07-15 07:18:52.728378] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:14:43.826 [2024-07-15 07:18:52.728386] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:14:43.826 [2024-07-15 07:18:52.728395] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.728400] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.728404] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9c2c0) 00:14:43.826 [2024-07-15 07:18:52.728412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.826 [2024-07-15 07:18:52.728432] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdd940, cid 0, qid 0 00:14:43.826 [2024-07-15 07:18:52.728510] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.826 [2024-07-15 07:18:52.728517] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.826 [2024-07-15 07:18:52.728521] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.728525] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdd940) on tqpair=0xc9c2c0 00:14:43.826 [2024-07-15 07:18:52.728532] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:14:43.826 [2024-07-15 07:18:52.728541] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:14:43.826 [2024-07-15 07:18:52.728548] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.728553] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.728557] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9c2c0) 00:14:43.826 [2024-07-15 07:18:52.728565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.826 [2024-07-15 07:18:52.728584] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdd940, cid 0, qid 0 00:14:43.826 [2024-07-15 07:18:52.728660] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.826 [2024-07-15 07:18:52.728675] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:14:43.826 [2024-07-15 07:18:52.728680] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.728685] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdd940) on tqpair=0xc9c2c0 00:14:43.826 [2024-07-15 07:18:52.728691] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:43.826 [2024-07-15 07:18:52.728703] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.728708] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.728712] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9c2c0) 00:14:43.826 [2024-07-15 07:18:52.728720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.826 [2024-07-15 07:18:52.728739] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdd940, cid 0, qid 0 00:14:43.826 [2024-07-15 07:18:52.728813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.826 [2024-07-15 07:18:52.728828] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.826 [2024-07-15 07:18:52.728833] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.728837] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdd940) on tqpair=0xc9c2c0 00:14:43.826 [2024-07-15 07:18:52.728843] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:14:43.826 [2024-07-15 07:18:52.728849] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:14:43.826 [2024-07-15 07:18:52.728858] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:43.826 [2024-07-15 07:18:52.728964] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:14:43.826 [2024-07-15 07:18:52.728973] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:43.826 [2024-07-15 07:18:52.728984] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.728989] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.728993] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9c2c0) 00:14:43.826 [2024-07-15 07:18:52.729001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.826 [2024-07-15 07:18:52.729022] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdd940, cid 0, qid 0 00:14:43.826 [2024-07-15 07:18:52.729115] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.826 [2024-07-15 07:18:52.729141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.826 [2024-07-15 07:18:52.729149] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.729153] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdd940) on tqpair=0xc9c2c0 00:14:43.826 [2024-07-15 07:18:52.729160] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:43.826 [2024-07-15 07:18:52.729172] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.729178] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.729182] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9c2c0) 00:14:43.826 [2024-07-15 07:18:52.729190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.826 [2024-07-15 07:18:52.729214] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdd940, cid 0, qid 0 00:14:43.826 [2024-07-15 07:18:52.729315] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.826 [2024-07-15 07:18:52.729323] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.826 [2024-07-15 07:18:52.729327] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.729332] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdd940) on tqpair=0xc9c2c0 00:14:43.826 [2024-07-15 07:18:52.729337] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:43.826 [2024-07-15 07:18:52.729343] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:14:43.826 [2024-07-15 07:18:52.729351] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:14:43.826 [2024-07-15 07:18:52.729363] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:14:43.826 [2024-07-15 07:18:52.729375] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.729379] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9c2c0) 00:14:43.826 [2024-07-15 07:18:52.729388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.826 [2024-07-15 07:18:52.729409] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdd940, cid 0, qid 0 00:14:43.826 [2024-07-15 07:18:52.729532] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.826 [2024-07-15 07:18:52.729539] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.826 [2024-07-15 07:18:52.729543] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.729547] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc9c2c0): datao=0, datal=4096, cccid=0 00:14:43.826 [2024-07-15 07:18:52.729553] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcdd940) on tqpair(0xc9c2c0): expected_datao=0, payload_size=4096 00:14:43.826 [2024-07-15 07:18:52.729558] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.729567] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.729572] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.729581] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.826 [2024-07-15 07:18:52.729588] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.826 [2024-07-15 07:18:52.729592] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.729596] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdd940) on tqpair=0xc9c2c0 00:14:43.826 [2024-07-15 07:18:52.729605] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:14:43.826 [2024-07-15 07:18:52.729611] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:14:43.826 [2024-07-15 07:18:52.729616] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:14:43.826 [2024-07-15 07:18:52.729621] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:14:43.826 [2024-07-15 07:18:52.729626] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:14:43.826 [2024-07-15 07:18:52.729632] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:14:43.826 [2024-07-15 07:18:52.729641] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:14:43.826 [2024-07-15 07:18:52.729649] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.729654] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.826 [2024-07-15 07:18:52.729658] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9c2c0) 00:14:43.826 [2024-07-15 07:18:52.729666] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:43.826 [2024-07-15 07:18:52.729686] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdd940, cid 0, qid 0 00:14:43.826 [2024-07-15 07:18:52.729768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.826 [2024-07-15 07:18:52.729775] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.826 [2024-07-15 07:18:52.729779] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.729784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdd940) on tqpair=0xc9c2c0 00:14:43.827 [2024-07-15 07:18:52.729792] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.729797] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.729801] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc9c2c0) 00:14:43.827 [2024-07-15 07:18:52.729808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.827 [2024-07-15 07:18:52.729815] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:14:43.827 [2024-07-15 07:18:52.729819] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.729823] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc9c2c0) 00:14:43.827 [2024-07-15 07:18:52.729830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.827 [2024-07-15 07:18:52.729846] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.729850] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.729854] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc9c2c0) 00:14:43.827 [2024-07-15 07:18:52.729860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.827 [2024-07-15 07:18:52.729867] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.729872] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.729876] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9c2c0) 00:14:43.827 [2024-07-15 07:18:52.729882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.827 [2024-07-15 07:18:52.729887] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:14:43.827 [2024-07-15 07:18:52.729901] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:43.827 [2024-07-15 07:18:52.729909] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.729913] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc9c2c0) 00:14:43.827 [2024-07-15 07:18:52.729921] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.827 [2024-07-15 07:18:52.729942] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdd940, cid 0, qid 0 00:14:43.827 [2024-07-15 07:18:52.729949] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcddac0, cid 1, qid 0 00:14:43.827 [2024-07-15 07:18:52.729954] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcddc40, cid 2, qid 0 00:14:43.827 [2024-07-15 07:18:52.729959] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdddc0, cid 3, qid 0 00:14:43.827 [2024-07-15 07:18:52.729964] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcddf40, cid 4, qid 0 00:14:43.827 [2024-07-15 07:18:52.730101] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.827 [2024-07-15 07:18:52.730115] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.827 [2024-07-15 07:18:52.730122] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.730128] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcddf40) on tqpair=0xc9c2c0 00:14:43.827 [2024-07-15 07:18:52.730138] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:14:43.827 [2024-07-15 07:18:52.730153] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:14:43.827 [2024-07-15 07:18:52.730168] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.730174] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc9c2c0) 00:14:43.827 [2024-07-15 07:18:52.730182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.827 [2024-07-15 07:18:52.730206] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcddf40, cid 4, qid 0 00:14:43.827 [2024-07-15 07:18:52.730295] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.827 [2024-07-15 07:18:52.730308] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.827 [2024-07-15 07:18:52.730312] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.730316] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc9c2c0): datao=0, datal=4096, cccid=4 00:14:43.827 [2024-07-15 07:18:52.730322] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcddf40) on tqpair(0xc9c2c0): expected_datao=0, payload_size=4096 00:14:43.827 [2024-07-15 07:18:52.730327] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.730334] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.730339] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.730348] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.827 [2024-07-15 07:18:52.730355] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.827 [2024-07-15 07:18:52.730359] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.730363] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcddf40) on tqpair=0xc9c2c0 00:14:43.827 [2024-07-15 07:18:52.730378] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:14:43.827 [2024-07-15 07:18:52.730439] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.730449] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc9c2c0) 00:14:43.827 [2024-07-15 07:18:52.730458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.827 [2024-07-15 07:18:52.730467] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.730471] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.730475] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc9c2c0) 00:14:43.827 [2024-07-15 07:18:52.730482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.827 [2024-07-15 07:18:52.730511] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcddf40, cid 4, qid 0 00:14:43.827 [2024-07-15 07:18:52.730519] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcde0c0, cid 5, qid 0 00:14:43.827 [2024-07-15 07:18:52.730651] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.827 [2024-07-15 07:18:52.730658] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.827 [2024-07-15 07:18:52.730662] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.730666] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc9c2c0): datao=0, datal=1024, cccid=4 00:14:43.827 [2024-07-15 07:18:52.730671] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcddf40) on tqpair(0xc9c2c0): expected_datao=0, payload_size=1024 00:14:43.827 [2024-07-15 07:18:52.730676] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.730683] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.730688] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.730694] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.827 [2024-07-15 07:18:52.730700] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.827 [2024-07-15 07:18:52.730704] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.730709] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcde0c0) on tqpair=0xc9c2c0 00:14:43.827 [2024-07-15 07:18:52.730728] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.827 [2024-07-15 07:18:52.730736] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.827 [2024-07-15 07:18:52.730740] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.730744] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcddf40) on tqpair=0xc9c2c0 00:14:43.827 [2024-07-15 07:18:52.730757] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.730762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc9c2c0) 00:14:43.827 [2024-07-15 07:18:52.730770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.827 [2024-07-15 07:18:52.730795] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcddf40, cid 4, qid 0 00:14:43.827 [2024-07-15 07:18:52.730882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.827 [2024-07-15 07:18:52.730889] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.827 [2024-07-15 07:18:52.730893] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.730897] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc9c2c0): datao=0, datal=3072, cccid=4 00:14:43.827 [2024-07-15 07:18:52.730902] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcddf40) on tqpair(0xc9c2c0): expected_datao=0, payload_size=3072 00:14:43.827 [2024-07-15 07:18:52.730907] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.730914] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.730919] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.827 [2024-07-15 
07:18:52.730927] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.827 [2024-07-15 07:18:52.730934] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.827 [2024-07-15 07:18:52.730938] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.730942] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcddf40) on tqpair=0xc9c2c0 00:14:43.827 [2024-07-15 07:18:52.730953] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.730958] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc9c2c0) 00:14:43.827 [2024-07-15 07:18:52.730965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.827 [2024-07-15 07:18:52.730989] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcddf40, cid 4, qid 0 00:14:43.827 [2024-07-15 07:18:52.731059] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:43.827 [2024-07-15 07:18:52.731066] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:43.827 [2024-07-15 07:18:52.731084] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.731090] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc9c2c0): datao=0, datal=8, cccid=4 00:14:43.827 [2024-07-15 07:18:52.731095] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcddf40) on tqpair(0xc9c2c0): expected_datao=0, payload_size=8 00:14:43.827 [2024-07-15 07:18:52.731100] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.731111] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:43.827 [2024-07-15 07:18:52.731119] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:43.827 ===================================================== 00:14:43.827 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:43.827 ===================================================== 00:14:43.827 Controller Capabilities/Features 00:14:43.827 ================================ 00:14:43.827 Vendor ID: 0000 00:14:43.827 Subsystem Vendor ID: 0000 00:14:43.827 Serial Number: .................... 00:14:43.827 Model Number: ........................................ 
00:14:43.827 Firmware Version: 24.09 00:14:43.828 Recommended Arb Burst: 0 00:14:43.828 IEEE OUI Identifier: 00 00 00 00:14:43.828 Multi-path I/O 00:14:43.828 May have multiple subsystem ports: No 00:14:43.828 May have multiple controllers: No 00:14:43.828 Associated with SR-IOV VF: No 00:14:43.828 Max Data Transfer Size: 131072 00:14:43.828 Max Number of Namespaces: 0 00:14:43.828 Max Number of I/O Queues: 1024 00:14:43.828 NVMe Specification Version (VS): 1.3 00:14:43.828 NVMe Specification Version (Identify): 1.3 00:14:43.828 Maximum Queue Entries: 128 00:14:43.828 Contiguous Queues Required: Yes 00:14:43.828 Arbitration Mechanisms Supported 00:14:43.828 Weighted Round Robin: Not Supported 00:14:43.828 Vendor Specific: Not Supported 00:14:43.828 Reset Timeout: 15000 ms 00:14:43.828 Doorbell Stride: 4 bytes 00:14:43.828 NVM Subsystem Reset: Not Supported 00:14:43.828 Command Sets Supported 00:14:43.828 NVM Command Set: Supported 00:14:43.828 Boot Partition: Not Supported 00:14:43.828 Memory Page Size Minimum: 4096 bytes 00:14:43.828 Memory Page Size Maximum: 4096 bytes 00:14:43.828 Persistent Memory Region: Not Supported 00:14:43.828 Optional Asynchronous Events Supported 00:14:43.828 Namespace Attribute Notices: Not Supported 00:14:43.828 Firmware Activation Notices: Not Supported 00:14:43.828 ANA Change Notices: Not Supported 00:14:43.828 PLE Aggregate Log Change Notices: Not Supported 00:14:43.828 LBA Status Info Alert Notices: Not Supported 00:14:43.828 EGE Aggregate Log Change Notices: Not Supported 00:14:43.828 Normal NVM Subsystem Shutdown event: Not Supported 00:14:43.828 Zone Descriptor Change Notices: Not Supported 00:14:43.828 Discovery Log Change Notices: Supported 00:14:43.828 Controller Attributes 00:14:43.828 128-bit Host Identifier: Not Supported 00:14:43.828 Non-Operational Permissive Mode: Not Supported 00:14:43.828 NVM Sets: Not Supported 00:14:43.828 Read Recovery Levels: Not Supported 00:14:43.828 Endurance Groups: Not Supported 00:14:43.828 Predictable Latency Mode: Not Supported 00:14:43.828 Traffic Based Keep ALive: Not Supported 00:14:43.828 Namespace Granularity: Not Supported 00:14:43.828 SQ Associations: Not Supported 00:14:43.828 UUID List: Not Supported 00:14:43.828 Multi-Domain Subsystem: Not Supported 00:14:43.828 Fixed Capacity Management: Not Supported 00:14:43.828 Variable Capacity Management: Not Supported 00:14:43.828 Delete Endurance Group: Not Supported 00:14:43.828 Delete NVM Set: Not Supported 00:14:43.828 Extended LBA Formats Supported: Not Supported 00:14:43.828 Flexible Data Placement Supported: Not Supported 00:14:43.828 00:14:43.828 Controller Memory Buffer Support 00:14:43.828 ================================ 00:14:43.828 Supported: No 00:14:43.828 00:14:43.828 Persistent Memory Region Support 00:14:43.828 ================================ 00:14:43.828 Supported: No 00:14:43.828 00:14:43.828 Admin Command Set Attributes 00:14:43.828 ============================ 00:14:43.828 Security Send/Receive: Not Supported 00:14:43.828 Format NVM: Not Supported 00:14:43.828 Firmware Activate/Download: Not Supported 00:14:43.828 Namespace Management: Not Supported 00:14:43.828 Device Self-Test: Not Supported 00:14:43.828 Directives: Not Supported 00:14:43.828 NVMe-MI: Not Supported 00:14:43.828 Virtualization Management: Not Supported 00:14:43.828 Doorbell Buffer Config: Not Supported 00:14:43.828 Get LBA Status Capability: Not Supported 00:14:43.828 Command & Feature Lockdown Capability: Not Supported 00:14:43.828 Abort Command Limit: 1 00:14:43.828 Async 
Event Request Limit: 4 00:14:43.828 Number of Firmware Slots: N/A 00:14:43.828 Firmware Slot 1 Read-Only: N/A 00:14:43.828 [2024-07-15 07:18:52.731145] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.828 [2024-07-15 07:18:52.731155] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.828 [2024-07-15 07:18:52.731159] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.828 [2024-07-15 07:18:52.731163] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcddf40) on tqpair=0xc9c2c0 00:14:43.828 Firmware Activation Without Reset: N/A 00:14:43.828 Multiple Update Detection Support: N/A 00:14:43.828 Firmware Update Granularity: No Information Provided 00:14:43.828 Per-Namespace SMART Log: No 00:14:43.828 Asymmetric Namespace Access Log Page: Not Supported 00:14:43.828 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:43.828 Command Effects Log Page: Not Supported 00:14:43.828 Get Log Page Extended Data: Supported 00:14:43.828 Telemetry Log Pages: Not Supported 00:14:43.828 Persistent Event Log Pages: Not Supported 00:14:43.828 Supported Log Pages Log Page: May Support 00:14:43.828 Commands Supported & Effects Log Page: Not Supported 00:14:43.828 Feature Identifiers & Effects Log Page:May Support 00:14:43.828 NVMe-MI Commands & Effects Log Page: May Support 00:14:43.828 Data Area 4 for Telemetry Log: Not Supported 00:14:43.828 Error Log Page Entries Supported: 128 00:14:43.828 Keep Alive: Not Supported 00:14:43.828 00:14:43.828 NVM Command Set Attributes 00:14:43.828 ========================== 00:14:43.828 Submission Queue Entry Size 00:14:43.828 Max: 1 00:14:43.828 Min: 1 00:14:43.828 Completion Queue Entry Size 00:14:43.828 Max: 1 00:14:43.828 Min: 1 00:14:43.828 Number of Namespaces: 0 00:14:43.828 Compare Command: Not Supported 00:14:43.828 Write Uncorrectable Command: Not Supported 00:14:43.828 Dataset Management Command: Not Supported 00:14:43.828 Write Zeroes Command: Not Supported 00:14:43.828 Set Features Save Field: Not Supported 00:14:43.828 Reservations: Not Supported 00:14:43.828 Timestamp: Not Supported 00:14:43.828 Copy: Not Supported 00:14:43.828 Volatile Write Cache: Not Present 00:14:43.828 Atomic Write Unit (Normal): 1 00:14:43.828 Atomic Write Unit (PFail): 1 00:14:43.828 Atomic Compare & Write Unit: 1 00:14:43.828 Fused Compare & Write: Supported 00:14:43.828 Scatter-Gather List 00:14:43.828 SGL Command Set: Supported 00:14:43.828 SGL Keyed: Supported 00:14:43.828 SGL Bit Bucket Descriptor: Not Supported 00:14:43.828 SGL Metadata Pointer: Not Supported 00:14:43.828 Oversized SGL: Not Supported 00:14:43.828 SGL Metadata Address: Not Supported 00:14:43.828 SGL Offset: Supported 00:14:43.828 Transport SGL Data Block: Not Supported 00:14:43.828 Replay Protected Memory Block: Not Supported 00:14:43.828 00:14:43.828 Firmware Slot Information 00:14:43.828 ========================= 00:14:43.828 Active slot: 0 00:14:43.828 00:14:43.828 00:14:43.828 Error Log 00:14:43.828 ========= 00:14:43.828 00:14:43.828 Active Namespaces 00:14:43.828 ================= 00:14:43.828 Discovery Log Page 00:14:43.828 ================== 00:14:43.828 Generation Counter: 2 00:14:43.828 Number of Records: 2 00:14:43.828 Record Format: 0 00:14:43.828 00:14:43.828 Discovery Log Entry 0 00:14:43.828 ---------------------- 00:14:43.828 Transport Type: 3 (TCP) 00:14:43.828 Address Family: 1 (IPv4) 00:14:43.828 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:43.828 Entry Flags: 00:14:43.828 Duplicate Returned
Information: 1 00:14:43.828 Explicit Persistent Connection Support for Discovery: 1 00:14:43.828 Transport Requirements: 00:14:43.828 Secure Channel: Not Required 00:14:43.828 Port ID: 0 (0x0000) 00:14:43.828 Controller ID: 65535 (0xffff) 00:14:43.828 Admin Max SQ Size: 128 00:14:43.828 Transport Service Identifier: 4420 00:14:43.828 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:43.828 Transport Address: 10.0.0.2 00:14:43.828 Discovery Log Entry 1 00:14:43.828 ---------------------- 00:14:43.828 Transport Type: 3 (TCP) 00:14:43.828 Address Family: 1 (IPv4) 00:14:43.828 Subsystem Type: 2 (NVM Subsystem) 00:14:43.828 Entry Flags: 00:14:43.828 Duplicate Returned Information: 0 00:14:43.828 Explicit Persistent Connection Support for Discovery: 0 00:14:43.828 Transport Requirements: 00:14:43.828 Secure Channel: Not Required 00:14:43.828 Port ID: 0 (0x0000) 00:14:43.828 Controller ID: 65535 (0xffff) 00:14:43.828 Admin Max SQ Size: 128 00:14:43.828 Transport Service Identifier: 4420 00:14:43.828 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:43.828 Transport Address: 10.0.0.2 [2024-07-15 07:18:52.731265] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:14:43.828 [2024-07-15 07:18:52.731279] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdd940) on tqpair=0xc9c2c0 00:14:43.828 [2024-07-15 07:18:52.731287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.828 [2024-07-15 07:18:52.731293] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcddac0) on tqpair=0xc9c2c0 00:14:43.828 [2024-07-15 07:18:52.731298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.828 [2024-07-15 07:18:52.731304] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcddc40) on tqpair=0xc9c2c0 00:14:43.828 [2024-07-15 07:18:52.731309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.828 [2024-07-15 07:18:52.731314] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdddc0) on tqpair=0xc9c2c0 00:14:43.828 [2024-07-15 07:18:52.731319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.828 [2024-07-15 07:18:52.731329] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.828 [2024-07-15 07:18:52.731335] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.828 [2024-07-15 07:18:52.731339] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9c2c0) 00:14:43.829 [2024-07-15 07:18:52.731347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.829 [2024-07-15 07:18:52.731371] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdddc0, cid 3, qid 0 00:14:43.829 [2024-07-15 07:18:52.731423] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.829 [2024-07-15 07:18:52.731430] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.829 [2024-07-15 07:18:52.731434] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.829 [2024-07-15 07:18:52.731438] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdddc0) on tqpair=0xc9c2c0 00:14:43.829 [2024-07-15 07:18:52.731447] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.829 [2024-07-15 07:18:52.731451] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.829 [2024-07-15 07:18:52.731456] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9c2c0) 00:14:43.829 [2024-07-15 07:18:52.731463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.829 [2024-07-15 07:18:52.731485] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdddc0, cid 3, qid 0 00:14:43.829 [2024-07-15 07:18:52.731556] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.829 [2024-07-15 07:18:52.731562] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.829 [2024-07-15 07:18:52.731566] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.829 [2024-07-15 07:18:52.731571] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdddc0) on tqpair=0xc9c2c0 00:14:43.829 [2024-07-15 07:18:52.731576] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:14:43.829 [2024-07-15 07:18:52.731582] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:14:43.829 [2024-07-15 07:18:52.731593] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.829 [2024-07-15 07:18:52.731598] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.829 [2024-07-15 07:18:52.731602] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9c2c0) 00:14:43.829 [2024-07-15 07:18:52.731610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.829 [2024-07-15 07:18:52.731628] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdddc0, cid 3, qid 0 00:14:43.829 [2024-07-15 07:18:52.731682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.829 [2024-07-15 07:18:52.731695] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.829 [2024-07-15 07:18:52.731700] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.829 [2024-07-15 07:18:52.731704] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdddc0) on tqpair=0xc9c2c0 00:14:43.829 [2024-07-15 07:18:52.731717] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.829 [2024-07-15 07:18:52.731722] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.829 [2024-07-15 07:18:52.731726] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9c2c0) 00:14:43.829 [2024-07-15 07:18:52.731734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.829 [2024-07-15 07:18:52.731753] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdddc0, cid 3, qid 0 00:14:43.829 [2024-07-15 07:18:52.731804] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.829 [2024-07-15 07:18:52.731811] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.829 [2024-07-15 07:18:52.731815] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.829 [2024-07-15 07:18:52.731819] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdddc0) on tqpair=0xc9c2c0 00:14:43.829 [2024-07-15 07:18:52.731830] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.829 [2024-07-15 07:18:52.731835] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.829 [2024-07-15 07:18:52.731839] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9c2c0) 00:14:43.829 [2024-07-15 07:18:52.731847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.829 [2024-07-15 07:18:52.731864] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdddc0, cid 3, qid 0 00:14:43.829 [2024-07-15 07:18:52.731914] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.829 [2024-07-15 07:18:52.731921] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.829 [2024-07-15 07:18:52.731925] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.829 [2024-07-15 07:18:52.731929] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdddc0) on tqpair=0xc9c2c0 00:14:43.829 [2024-07-15 07:18:52.731940] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.829 [2024-07-15 07:18:52.731945] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.829 [2024-07-15 07:18:52.731949] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9c2c0) 00:14:43.829 [2024-07-15 07:18:52.731957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.829 [2024-07-15 07:18:52.731974] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdddc0, cid 3, qid 0 00:14:43.829 [2024-07-15 07:18:52.732021] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.829 [2024-07-15 07:18:52.732027] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.829 [2024-07-15 07:18:52.732031] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.829 [2024-07-15 07:18:52.732036] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdddc0) on tqpair=0xc9c2c0 00:14:43.829 [2024-07-15 07:18:52.732046] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:43.829 [2024-07-15 07:18:52.732051] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:43.829 [2024-07-15 07:18:52.732055] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc9c2c0) 00:14:43.829 [2024-07-15 07:18:52.732063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.829 [2024-07-15 07:18:52.736102] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdddc0, cid 3, qid 0 00:14:43.829 [2024-07-15 07:18:52.736186] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:43.829 [2024-07-15 07:18:52.736197] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:43.829 [2024-07-15 07:18:52.736201] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:43.829 [2024-07-15 07:18:52.736206] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdddc0) on tqpair=0xc9c2c0 00:14:43.829 
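The GET LOG PAGE (02) commands traced earlier (cdw10 0x00ff0070, followed by smaller re-reads of 1024, 3072 and 8 bytes) fetch the discovery log page that the tool prints as "Discovery Log Entry 0/1" above. Below is a minimal, illustrative C sketch (not the identify tool itself) of issuing that read with SPDK's admin-command API; it assumes a `ctrlr` obtained from spdk_nvme_connect() as in the earlier sketch, does a single 4 KiB read without the generation-counter re-check the traced run performs, and omits completion-status checking.

/* Minimal sketch: read the discovery log page (0x70) and print the
 * header plus each entry's address, mirroring the printed entries. */
#include "spdk/stdinc.h"
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_done;

static void
get_log_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	/* Completion status is not checked in this sketch. */
	(void)cb_arg;
	(void)cpl;
	g_done = true;
}

static void
print_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvmf_discovery_log_page *log;
	uint64_t i, max_entries;

	log = calloc(1, 4096);
	if (log == NULL) {
		return;
	}

	g_done = false;
	if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
					     SPDK_NVME_GLOBAL_NS_TAG, log, 4096, 0,
					     get_log_done, NULL) != 0) {
		free(log);
		return;
	}
	while (!g_done) {
		/* Poll the admin queue until the log page read completes. */
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}

	printf("genctr=%" PRIu64 " numrec=%" PRIu64 "\n", log->genctr, log->numrec);

	/* A 4 KiB buffer holds the 1 KiB header plus three 1 KiB entries. */
	max_entries = (4096 - sizeof(*log)) / sizeof(log->entries[0]);
	for (i = 0; i < log->numrec && i < max_entries; i++) {
		struct spdk_nvmf_discovery_log_page_entry *e = &log->entries[i];

		printf("entry %" PRIu64 ": trsvcid=%.32s traddr=%.256s subnqn=%.256s\n",
		       i, e->trsvcid, e->traddr, e->subnqn);
	}
	free(log);
}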
[2024-07-15 07:18:52.736216] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:14:43.829 00:14:43.829 07:18:52 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:44.119 [2024-07-15 07:18:52.780538] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:14:44.119 [2024-07-15 07:18:52.780591] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74682 ] 00:14:44.119 [2024-07-15 07:18:52.920676] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:14:44.119 [2024-07-15 07:18:52.920742] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:44.119 [2024-07-15 07:18:52.920749] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:44.119 [2024-07-15 07:18:52.920763] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:44.119 [2024-07-15 07:18:52.920771] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:44.119 [2024-07-15 07:18:52.920917] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:14:44.119 [2024-07-15 07:18:52.920969] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd002c0 0 00:14:44.119 [2024-07-15 07:18:52.925100] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:44.119 [2024-07-15 07:18:52.925124] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:44.119 [2024-07-15 07:18:52.925130] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:44.119 [2024-07-15 07:18:52.925135] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:44.119 [2024-07-15 07:18:52.925179] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.119 [2024-07-15 07:18:52.925187] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.119 [2024-07-15 07:18:52.925191] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd002c0) 00:14:44.119 [2024-07-15 07:18:52.925206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:44.119 [2024-07-15 07:18:52.925240] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41940, cid 0, qid 0 00:14:44.119 [2024-07-15 07:18:52.933096] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.119 [2024-07-15 07:18:52.933120] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.119 [2024-07-15 07:18:52.933126] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.119 [2024-07-15 07:18:52.933132] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41940) on tqpair=0xd002c0 00:14:44.119 [2024-07-15 07:18:52.933148] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:44.119 [2024-07-15 07:18:52.933158] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:14:44.119 [2024-07-15 07:18:52.933165] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:14:44.119 [2024-07-15 07:18:52.933184] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.119 [2024-07-15 07:18:52.933190] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.119 [2024-07-15 07:18:52.933195] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd002c0) 00:14:44.119 [2024-07-15 07:18:52.933206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.119 [2024-07-15 07:18:52.933237] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41940, cid 0, qid 0 00:14:44.119 [2024-07-15 07:18:52.933321] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.119 [2024-07-15 07:18:52.933335] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.119 [2024-07-15 07:18:52.933343] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.119 [2024-07-15 07:18:52.933351] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41940) on tqpair=0xd002c0 00:14:44.119 [2024-07-15 07:18:52.933361] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:14:44.119 [2024-07-15 07:18:52.933371] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:14:44.119 [2024-07-15 07:18:52.933380] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.119 [2024-07-15 07:18:52.933385] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.119 [2024-07-15 07:18:52.933389] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd002c0) 00:14:44.119 [2024-07-15 07:18:52.933398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.120 [2024-07-15 07:18:52.933423] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41940, cid 0, qid 0 00:14:44.120 [2024-07-15 07:18:52.933499] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.120 [2024-07-15 07:18:52.933512] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.120 [2024-07-15 07:18:52.933517] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.120 [2024-07-15 07:18:52.933522] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41940) on tqpair=0xd002c0 00:14:44.120 [2024-07-15 07:18:52.933529] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:14:44.120 [2024-07-15 07:18:52.933538] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:14:44.120 [2024-07-15 07:18:52.933547] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.120 [2024-07-15 07:18:52.933551] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.120 [2024-07-15 07:18:52.933555] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd002c0) 00:14:44.120 [2024-07-15 
07:18:52.933563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.120 [2024-07-15 07:18:52.933584] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41940, cid 0, qid 0 00:14:44.120 [2024-07-15 07:18:52.933641] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.120 [2024-07-15 07:18:52.933653] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.120 [2024-07-15 07:18:52.933658] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.120 [2024-07-15 07:18:52.933662] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41940) on tqpair=0xd002c0 00:14:44.120 [2024-07-15 07:18:52.933668] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:44.120 [2024-07-15 07:18:52.933680] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.120 [2024-07-15 07:18:52.933685] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.120 [2024-07-15 07:18:52.933689] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd002c0) 00:14:44.120 [2024-07-15 07:18:52.933697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.120 [2024-07-15 07:18:52.933716] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41940, cid 0, qid 0 00:14:44.120 [2024-07-15 07:18:52.933761] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.120 [2024-07-15 07:18:52.933768] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.120 [2024-07-15 07:18:52.933772] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.120 [2024-07-15 07:18:52.933777] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41940) on tqpair=0xd002c0 00:14:44.120 [2024-07-15 07:18:52.933783] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:14:44.120 [2024-07-15 07:18:52.933788] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:14:44.120 [2024-07-15 07:18:52.933797] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:44.120 [2024-07-15 07:18:52.933904] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:14:44.120 [2024-07-15 07:18:52.933916] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:44.120 [2024-07-15 07:18:52.933928] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.120 [2024-07-15 07:18:52.933933] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.120 [2024-07-15 07:18:52.933937] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd002c0) 00:14:44.120 [2024-07-15 07:18:52.933946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.120 [2024-07-15 07:18:52.933968] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xd41940, cid 0, qid 0 00:14:44.120 [2024-07-15 07:18:52.934020] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.120 [2024-07-15 07:18:52.934027] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.120 [2024-07-15 07:18:52.934031] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.120 [2024-07-15 07:18:52.934036] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41940) on tqpair=0xd002c0 00:14:44.120 [2024-07-15 07:18:52.934042] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:44.120 [2024-07-15 07:18:52.934053] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.120 [2024-07-15 07:18:52.934058] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.120 [2024-07-15 07:18:52.934062] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd002c0) 00:14:44.120 [2024-07-15 07:18:52.934070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.120 [2024-07-15 07:18:52.934104] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41940, cid 0, qid 0 00:14:44.120 [2024-07-15 07:18:52.934174] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.120 [2024-07-15 07:18:52.934181] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.120 [2024-07-15 07:18:52.934185] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.120 [2024-07-15 07:18:52.934190] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41940) on tqpair=0xd002c0 00:14:44.120 [2024-07-15 07:18:52.934195] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:44.120 [2024-07-15 07:18:52.934201] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:14:44.120 [2024-07-15 07:18:52.934210] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:14:44.120 [2024-07-15 07:18:52.934222] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:14:44.120 [2024-07-15 07:18:52.934234] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.120 [2024-07-15 07:18:52.934238] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd002c0) 00:14:44.120 [2024-07-15 07:18:52.934247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.120 [2024-07-15 07:18:52.934266] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41940, cid 0, qid 0 00:14:44.120 [2024-07-15 07:18:52.934371] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:44.120 [2024-07-15 07:18:52.934387] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:44.120 [2024-07-15 07:18:52.934392] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:44.120 [2024-07-15 07:18:52.934396] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd002c0): datao=0, datal=4096, 
cccid=0 00:14:44.120 [2024-07-15 07:18:52.934402] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd41940) on tqpair(0xd002c0): expected_datao=0, payload_size=4096 00:14:44.120 [2024-07-15 07:18:52.934408] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.120 [2024-07-15 07:18:52.934417] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:44.120 [2024-07-15 07:18:52.934422] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:44.120 [2024-07-15 07:18:52.934432] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.120 [2024-07-15 07:18:52.934439] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.120 [2024-07-15 07:18:52.934443] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.120 [2024-07-15 07:18:52.934448] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41940) on tqpair=0xd002c0 00:14:44.121 [2024-07-15 07:18:52.934458] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:14:44.121 [2024-07-15 07:18:52.934464] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:14:44.121 [2024-07-15 07:18:52.934469] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:14:44.121 [2024-07-15 07:18:52.934474] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:14:44.121 [2024-07-15 07:18:52.934480] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:14:44.121 [2024-07-15 07:18:52.934486] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:14:44.121 [2024-07-15 07:18:52.934496] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:14:44.121 [2024-07-15 07:18:52.934504] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.121 [2024-07-15 07:18:52.934509] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.121 [2024-07-15 07:18:52.934513] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd002c0) 00:14:44.121 [2024-07-15 07:18:52.934522] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:44.121 [2024-07-15 07:18:52.934543] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41940, cid 0, qid 0 00:14:44.121 [2024-07-15 07:18:52.934623] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.121 [2024-07-15 07:18:52.934631] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.121 [2024-07-15 07:18:52.934635] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.121 [2024-07-15 07:18:52.934640] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41940) on tqpair=0xd002c0 00:14:44.121 [2024-07-15 07:18:52.934648] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.121 [2024-07-15 07:18:52.934653] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.121 [2024-07-15 07:18:52.934657] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd002c0) 00:14:44.121 [2024-07-15 
07:18:52.934664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.121 [2024-07-15 07:18:52.934671] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.121 [2024-07-15 07:18:52.934676] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.121 [2024-07-15 07:18:52.934680] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd002c0) 00:14:44.121 [2024-07-15 07:18:52.934686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.121 [2024-07-15 07:18:52.934695] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.121 [2024-07-15 07:18:52.934699] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.121 [2024-07-15 07:18:52.934704] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd002c0) 00:14:44.121 [2024-07-15 07:18:52.934710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.121 [2024-07-15 07:18:52.934717] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.121 [2024-07-15 07:18:52.934721] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.121 [2024-07-15 07:18:52.934726] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.121 [2024-07-15 07:18:52.934732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.121 [2024-07-15 07:18:52.934738] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:44.121 [2024-07-15 07:18:52.934752] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:44.121 [2024-07-15 07:18:52.934761] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.121 [2024-07-15 07:18:52.934765] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd002c0) 00:14:44.121 [2024-07-15 07:18:52.934773] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.121 [2024-07-15 07:18:52.934798] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41940, cid 0, qid 0 00:14:44.121 [2024-07-15 07:18:52.934809] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41ac0, cid 1, qid 0 00:14:44.121 [2024-07-15 07:18:52.934815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41c40, cid 2, qid 0 00:14:44.121 [2024-07-15 07:18:52.934820] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.121 [2024-07-15 07:18:52.934826] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41f40, cid 4, qid 0 00:14:44.121 [2024-07-15 07:18:52.934909] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.121 [2024-07-15 07:18:52.934916] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.121 [2024-07-15 07:18:52.934921] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.121 [2024-07-15 07:18:52.934925] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41f40) on tqpair=0xd002c0 00:14:44.121 [2024-07-15 07:18:52.934931] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:14:44.121 [2024-07-15 07:18:52.934942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:44.121 [2024-07-15 07:18:52.934952] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:14:44.121 [2024-07-15 07:18:52.934959] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:44.121 [2024-07-15 07:18:52.934966] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.121 [2024-07-15 07:18:52.934971] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.121 [2024-07-15 07:18:52.934975] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd002c0) 00:14:44.121 [2024-07-15 07:18:52.934984] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:44.121 [2024-07-15 07:18:52.935004] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41f40, cid 4, qid 0 00:14:44.121 [2024-07-15 07:18:52.935057] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.121 [2024-07-15 07:18:52.935065] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.121 [2024-07-15 07:18:52.935069] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.121 [2024-07-15 07:18:52.935086] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41f40) on tqpair=0xd002c0 00:14:44.121 [2024-07-15 07:18:52.935153] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:14:44.121 [2024-07-15 07:18:52.935165] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:44.121 [2024-07-15 07:18:52.935174] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.121 [2024-07-15 07:18:52.935179] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd002c0) 00:14:44.121 [2024-07-15 07:18:52.935187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.121 ===================================================== 00:14:44.121 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:44.121 ===================================================== 00:14:44.121 Controller Capabilities/Features 00:14:44.121 ================================ 00:14:44.121 Vendor ID: 8086 00:14:44.121 Subsystem Vendor ID: 8086 00:14:44.121 Serial Number: SPDK00000000000001 00:14:44.121 Model Number: SPDK bdev Controller 00:14:44.121 Firmware Version: 24.09 00:14:44.122 Recommended Arb Burst: 6 00:14:44.122 IEEE OUI Identifier: e4 d2 5c 00:14:44.122 Multi-path I/O 00:14:44.122 May have multiple subsystem ports: Yes 00:14:44.122 May have multiple controllers: Yes 00:14:44.122 Associated with SR-IOV VF: No 00:14:44.122 Max Data Transfer 
Size: 131072 00:14:44.122 Max Number of Namespaces: 32 00:14:44.122 Max Number of I/O Queues: 127 00:14:44.122 NVMe Specification Version (VS): 1.3 00:14:44.122 NVMe Specification Version (Identify): 1.3 00:14:44.122 Maximum Queue Entries: 128 00:14:44.122 Contiguous Queues Required: Yes 00:14:44.122 Arbitration Mechanisms Supported 00:14:44.122 Weighted Round Robin: Not Supported 00:14:44.122 Vendor Specific: Not Supported 00:14:44.122 Reset Timeout: 15000 ms 00:14:44.122 Doorbell Stride: 4 bytes 00:14:44.122 NVM Subsystem Reset: Not Supported 00:14:44.122 Command Sets Supported 00:14:44.122 NVM Command Set: Supported 00:14:44.122 Boot Partition: Not Supported 00:14:44.122 Memory Page Size Minimum: 4096 bytes 00:14:44.122 Memory Page Size Maximum: 4096 bytes 00:14:44.122 Persistent Memory Region: Not Supported 00:14:44.122 Optional Asynchronous Events Supported 00:14:44.122 Namespace Attribute Notices: Supported 00:14:44.122 Firmware Activation Notices: Not Supported 00:14:44.122 ANA Change Notices: Not Supported 00:14:44.122 PLE Aggregate Log Change Notices: Not Supported 00:14:44.122 LBA Status Info Alert Notices: Not Supported 00:14:44.122 EGE Aggregate Log Change Notices: Not Supported 00:14:44.122 Normal NVM Subsystem Shutdown event: Not Supported 00:14:44.122 Zone Descriptor Change Notices: Not Supported 00:14:44.122 Discovery Log Change Notices: Not Supported 00:14:44.122 Controller Attributes 00:14:44.122 128-bit Host Identifier: Supported 00:14:44.122 Non-Operational Permissive Mode: Not Supported 00:14:44.122 NVM Sets: Not Supported 00:14:44.122 Read Recovery Levels: Not Supported 00:14:44.122 Endurance Groups: Not Supported 00:14:44.122 Predictable Latency Mode: Not Supported 00:14:44.122 Traffic Based Keep ALive: Not Supported 00:14:44.122 Namespace Granularity: Not Supported 00:14:44.122 SQ Associations: Not Supported 00:14:44.122 UUID List: Not Supported 00:14:44.122 Multi-Domain Subsystem: Not Supported 00:14:44.122 Fixed Capacity Management: Not Supported 00:14:44.122 Variable Capacity Management: Not Supported 00:14:44.122 Delete Endurance Group: Not Supported 00:14:44.122 Delete NVM Set: Not Supported 00:14:44.122 Extended LBA Formats Supported: Not Supported 00:14:44.122 Flexible Data Placement Supported: Not Supported 00:14:44.122 00:14:44.122 Controller Memory Buffer Support 00:14:44.122 ================================ 00:14:44.122 Supported: No 00:14:44.122 00:14:44.122 Persistent Memory Region Support 00:14:44.122 ================================ 00:14:44.122 Supported: No 00:14:44.122 00:14:44.122 Admin Command Set Attributes 00:14:44.122 ============================ 00:14:44.122 Security Send/Receive: Not Supported 00:14:44.122 Format NVM: Not Supported 00:14:44.122 Firmware Activate/Download: Not Supported 00:14:44.122 Namespace Management: Not Supported 00:14:44.122 Device Self-Test: Not Supported 00:14:44.122 Directives: Not Supported 00:14:44.122 NVMe-MI: Not Supported 00:14:44.122 Virtualization Management: Not Supported 00:14:44.122 Doorbell Buffer Config: Not Supported 00:14:44.122 Get LBA Status Capability: Not Supported 00:14:44.122 Command & Feature Lockdown Capability: Not Supported 00:14:44.122 Abort Command Limit: 4 00:14:44.122 Async Event Request Limit: 4 00:14:44.122 Number of Firmware Slots: N/A 00:14:44.122 Firmware Slot 1 Read-Only: N/A 00:14:44.122 Firmware Activation Without Reset: [2024-07-15 07:18:52.935208] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41f40, cid 4, qid 0 00:14:44.122 [2024-07-15 07:18:52.935277] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:44.122 [2024-07-15 07:18:52.935286] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:44.122 [2024-07-15 07:18:52.935291] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:44.122 [2024-07-15 07:18:52.935295] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd002c0): datao=0, datal=4096, cccid=4 00:14:44.122 [2024-07-15 07:18:52.935301] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd41f40) on tqpair(0xd002c0): expected_datao=0, payload_size=4096 00:14:44.122 [2024-07-15 07:18:52.935306] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.122 [2024-07-15 07:18:52.935314] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:44.122 [2024-07-15 07:18:52.935318] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:44.122 [2024-07-15 07:18:52.935327] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.122 [2024-07-15 07:18:52.935334] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.122 [2024-07-15 07:18:52.935338] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.122 [2024-07-15 07:18:52.935343] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41f40) on tqpair=0xd002c0 00:14:44.122 [2024-07-15 07:18:52.935360] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:14:44.122 [2024-07-15 07:18:52.935372] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:14:44.122 [2024-07-15 07:18:52.935383] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:14:44.122 [2024-07-15 07:18:52.935392] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.122 [2024-07-15 07:18:52.935397] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd002c0) 00:14:44.122 [2024-07-15 07:18:52.935405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.122 [2024-07-15 07:18:52.935427] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41f40, cid 4, qid 0 00:14:44.122 [2024-07-15 07:18:52.935496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:44.122 [2024-07-15 07:18:52.935503] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:44.122 [2024-07-15 07:18:52.935508] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:44.122 [2024-07-15 07:18:52.935512] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd002c0): datao=0, datal=4096, cccid=4 00:14:44.123 [2024-07-15 07:18:52.935517] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd41f40) on tqpair(0xd002c0): expected_datao=0, payload_size=4096 00:14:44.123 [2024-07-15 07:18:52.935523] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.123 [2024-07-15 07:18:52.935532] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:44.123 [2024-07-15 07:18:52.935540] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:44.123 [2024-07-15 07:18:52.935553] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.123 [2024-07-15 
07:18:52.935560] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.123 [2024-07-15 07:18:52.935564] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.123 [2024-07-15 07:18:52.935568] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41f40) on tqpair=0xd002c0 00:14:44.123 [2024-07-15 07:18:52.935585] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:44.123 [2024-07-15 07:18:52.935597] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:44.123 [2024-07-15 07:18:52.935607] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.123 [2024-07-15 07:18:52.935612] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd002c0) 00:14:44.123 [2024-07-15 07:18:52.935621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.123 [2024-07-15 07:18:52.935642] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41f40, cid 4, qid 0 00:14:44.123 [2024-07-15 07:18:52.935703] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:44.123 [2024-07-15 07:18:52.935710] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:44.123 [2024-07-15 07:18:52.935714] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:44.123 [2024-07-15 07:18:52.935719] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd002c0): datao=0, datal=4096, cccid=4 00:14:44.123 [2024-07-15 07:18:52.935724] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd41f40) on tqpair(0xd002c0): expected_datao=0, payload_size=4096 00:14:44.123 [2024-07-15 07:18:52.935729] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.123 [2024-07-15 07:18:52.935737] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:44.123 [2024-07-15 07:18:52.935741] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:44.123 [2024-07-15 07:18:52.935750] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.123 [2024-07-15 07:18:52.935757] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.123 [2024-07-15 07:18:52.935761] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.123 [2024-07-15 07:18:52.935765] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41f40) on tqpair=0xd002c0 00:14:44.123 [2024-07-15 07:18:52.935775] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:44.123 [2024-07-15 07:18:52.935784] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:14:44.123 [2024-07-15 07:18:52.935795] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:14:44.123 [2024-07-15 07:18:52.935802] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:44.123 [2024-07-15 07:18:52.935808] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:44.123 [2024-07-15 07:18:52.935814] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:14:44.123 [2024-07-15 07:18:52.935821] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:14:44.123 [2024-07-15 07:18:52.935826] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:14:44.123 [2024-07-15 07:18:52.935832] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:14:44.123 [2024-07-15 07:18:52.935850] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.123 [2024-07-15 07:18:52.935855] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd002c0) 00:14:44.123 [2024-07-15 07:18:52.935863] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.123 [2024-07-15 07:18:52.935871] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.123 [2024-07-15 07:18:52.935875] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.123 [2024-07-15 07:18:52.935879] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd002c0) 00:14:44.123 [2024-07-15 07:18:52.935886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.123 [2024-07-15 07:18:52.935911] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41f40, cid 4, qid 0 00:14:44.123 [2024-07-15 07:18:52.935920] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd420c0, cid 5, qid 0 00:14:44.123 [2024-07-15 07:18:52.935981] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.123 [2024-07-15 07:18:52.935988] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.123 [2024-07-15 07:18:52.935992] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.123 [2024-07-15 07:18:52.935997] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41f40) on tqpair=0xd002c0 00:14:44.123 [2024-07-15 07:18:52.936004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.123 [2024-07-15 07:18:52.936011] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.123 [2024-07-15 07:18:52.936015] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.123 [2024-07-15 07:18:52.936019] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd420c0) on tqpair=0xd002c0 00:14:44.123 [2024-07-15 07:18:52.936030] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.123 [2024-07-15 07:18:52.936035] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd002c0) 00:14:44.123 [2024-07-15 07:18:52.936043] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.123 [2024-07-15 07:18:52.936067] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd420c0, cid 5, qid 0 00:14:44.123 [2024-07-15 07:18:52.936149] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:14:44.123 [2024-07-15 07:18:52.936158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.123 [2024-07-15 07:18:52.936162] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.123 [2024-07-15 07:18:52.936167] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd420c0) on tqpair=0xd002c0 00:14:44.123 [2024-07-15 07:18:52.936178] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.123 [2024-07-15 07:18:52.936183] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd002c0) 00:14:44.123 [2024-07-15 07:18:52.936191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.123 [2024-07-15 07:18:52.936211] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd420c0, cid 5, qid 0 00:14:44.123 [2024-07-15 07:18:52.936259] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.123 [2024-07-15 07:18:52.936267] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.123 [2024-07-15 07:18:52.936271] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.123 [2024-07-15 07:18:52.936275] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd420c0) on tqpair=0xd002c0 00:14:44.123 [2024-07-15 07:18:52.936286] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.123 [2024-07-15 07:18:52.936291] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd002c0) 00:14:44.123 [2024-07-15 07:18:52.936299] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.123 [2024-07-15 07:18:52.936316] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd420c0, cid 5, qid 0 00:14:44.123 [2024-07-15 07:18:52.936365] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.123 [2024-07-15 07:18:52.936373] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.123 [2024-07-15 07:18:52.936377] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.124 [2024-07-15 07:18:52.936381] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd420c0) on tqpair=0xd002c0 00:14:44.124 [2024-07-15 07:18:52.936401] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.124 [2024-07-15 07:18:52.936407] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd002c0) 00:14:44.124 [2024-07-15 07:18:52.936415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.124 [2024-07-15 07:18:52.936424] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.124 [2024-07-15 07:18:52.936428] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd002c0) 00:14:44.124 [2024-07-15 07:18:52.936436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.124 [2024-07-15 07:18:52.936444] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.124 [2024-07-15 07:18:52.936449] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=6 on tqpair(0xd002c0) 00:14:44.124 [2024-07-15 07:18:52.936458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.124 [2024-07-15 07:18:52.936474] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.124 [2024-07-15 07:18:52.936479] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd002c0) 00:14:44.124 [2024-07-15 07:18:52.936486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.124 [2024-07-15 07:18:52.936508] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd420c0, cid 5, qid 0 00:14:44.124 [2024-07-15 07:18:52.936516] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41f40, cid 4, qid 0 00:14:44.124 [2024-07-15 07:18:52.936521] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd42240, cid 6, qid 0 00:14:44.124 [2024-07-15 07:18:52.936527] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd423c0, cid 7, qid 0 00:14:44.124 [2024-07-15 07:18:52.936693] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:44.124 [2024-07-15 07:18:52.936700] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:44.124 [2024-07-15 07:18:52.936705] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:44.124 [2024-07-15 07:18:52.936709] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd002c0): datao=0, datal=8192, cccid=5 00:14:44.124 [2024-07-15 07:18:52.936714] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd420c0) on tqpair(0xd002c0): expected_datao=0, payload_size=8192 00:14:44.124 [2024-07-15 07:18:52.936720] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.124 [2024-07-15 07:18:52.936737] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:44.124 [2024-07-15 07:18:52.936743] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:44.124 [2024-07-15 07:18:52.936749] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:44.124 [2024-07-15 07:18:52.936755] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:44.124 [2024-07-15 07:18:52.936759] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:44.124 [2024-07-15 07:18:52.936764] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd002c0): datao=0, datal=512, cccid=4 00:14:44.124 [2024-07-15 07:18:52.936769] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd41f40) on tqpair(0xd002c0): expected_datao=0, payload_size=512 00:14:44.124 [2024-07-15 07:18:52.936774] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.124 [2024-07-15 07:18:52.936781] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:44.124 [2024-07-15 07:18:52.936785] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:44.124 [2024-07-15 07:18:52.936791] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:44.124 [2024-07-15 07:18:52.936797] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:44.124 [2024-07-15 07:18:52.936801] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:44.124 [2024-07-15 07:18:52.936805] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd002c0): datao=0, datal=512, cccid=6 00:14:44.124 [2024-07-15 07:18:52.936810] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd42240) on tqpair(0xd002c0): expected_datao=0, payload_size=512 00:14:44.124 [2024-07-15 07:18:52.936816] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.124 [2024-07-15 07:18:52.936823] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:44.124 [2024-07-15 07:18:52.936827] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:44.124 [2024-07-15 07:18:52.936834] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:44.124 [2024-07-15 07:18:52.936843] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:44.124 [2024-07-15 07:18:52.936850] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:44.124 [2024-07-15 07:18:52.936854] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd002c0): datao=0, datal=4096, cccid=7 00:14:44.124 [2024-07-15 07:18:52.936860] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd423c0) on tqpair(0xd002c0): expected_datao=0, payload_size=4096 00:14:44.124 [2024-07-15 07:18:52.936865] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.124 [2024-07-15 07:18:52.936872] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:44.124 [2024-07-15 07:18:52.936876] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:44.124 [2024-07-15 07:18:52.936885] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.124 [2024-07-15 07:18:52.936892] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.124 [2024-07-15 07:18:52.936896] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.124 [2024-07-15 07:18:52.936901] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd420c0) on tqpair=0xd002c0 00:14:44.124 [2024-07-15 07:18:52.936919] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.124 [2024-07-15 07:18:52.936927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.124 [2024-07-15 07:18:52.936931] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.124 [2024-07-15 07:18:52.936935] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41f40) on tqpair=0xd002c0 00:14:44.124 [2024-07-15 07:18:52.936949] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.124 [2024-07-15 07:18:52.936955] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.124 [2024-07-15 07:18:52.936959] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.124 [2024-07-15 07:18:52.936964] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd42240) on tqpair=0xd002c0 00:14:44.124 [2024-07-15 07:18:52.936972] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.124 [2024-07-15 07:18:52.936979] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.124 [2024-07-15 07:18:52.936983] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.124 [2024-07-15 07:18:52.936987] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd423c0) on tqpair=0xd002c0 00:14:44.124 N/A 00:14:44.124 Multiple Update Detection Support: N/A 00:14:44.124 Firmware Update Granularity: No Information Provided 00:14:44.124 
Per-Namespace SMART Log: No 00:14:44.124 Asymmetric Namespace Access Log Page: Not Supported 00:14:44.124 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:44.124 Command Effects Log Page: Supported 00:14:44.124 Get Log Page Extended Data: Supported 00:14:44.124 Telemetry Log Pages: Not Supported 00:14:44.124 Persistent Event Log Pages: Not Supported 00:14:44.124 Supported Log Pages Log Page: May Support 00:14:44.124 Commands Supported & Effects Log Page: Not Supported 00:14:44.124 Feature Identifiers & Effects Log Page:May Support 00:14:44.124 NVMe-MI Commands & Effects Log Page: May Support 00:14:44.124 Data Area 4 for Telemetry Log: Not Supported 00:14:44.124 Error Log Page Entries Supported: 128 00:14:44.124 Keep Alive: Supported 00:14:44.124 Keep Alive Granularity: 10000 ms 00:14:44.124 00:14:44.124 NVM Command Set Attributes 00:14:44.124 ========================== 00:14:44.124 Submission Queue Entry Size 00:14:44.124 Max: 64 00:14:44.124 Min: 64 00:14:44.124 Completion Queue Entry Size 00:14:44.124 Max: 16 00:14:44.124 Min: 16 00:14:44.124 Number of Namespaces: 32 00:14:44.124 Compare Command: Supported 00:14:44.124 Write Uncorrectable Command: Not Supported 00:14:44.124 Dataset Management Command: Supported 00:14:44.124 Write Zeroes Command: Supported 00:14:44.124 Set Features Save Field: Not Supported 00:14:44.124 Reservations: Supported 00:14:44.124 Timestamp: Not Supported 00:14:44.124 Copy: Supported 00:14:44.124 Volatile Write Cache: Present 00:14:44.124 Atomic Write Unit (Normal): 1 00:14:44.124 Atomic Write Unit (PFail): 1 00:14:44.124 Atomic Compare & Write Unit: 1 00:14:44.124 Fused Compare & Write: Supported 00:14:44.124 Scatter-Gather List 00:14:44.124 SGL Command Set: Supported 00:14:44.124 SGL Keyed: Supported 00:14:44.124 SGL Bit Bucket Descriptor: Not Supported 00:14:44.124 SGL Metadata Pointer: Not Supported 00:14:44.124 Oversized SGL: Not Supported 00:14:44.124 SGL Metadata Address: Not Supported 00:14:44.124 SGL Offset: Supported 00:14:44.124 Transport SGL Data Block: Not Supported 00:14:44.124 Replay Protected Memory Block: Not Supported 00:14:44.124 00:14:44.124 Firmware Slot Information 00:14:44.124 ========================= 00:14:44.124 Active slot: 1 00:14:44.124 Slot 1 Firmware Revision: 24.09 00:14:44.124 00:14:44.124 00:14:44.124 Commands Supported and Effects 00:14:44.124 ============================== 00:14:44.124 Admin Commands 00:14:44.125 -------------- 00:14:44.125 Get Log Page (02h): Supported 00:14:44.125 Identify (06h): Supported 00:14:44.125 Abort (08h): Supported 00:14:44.125 Set Features (09h): Supported 00:14:44.125 Get Features (0Ah): Supported 00:14:44.125 Asynchronous Event Request (0Ch): Supported 00:14:44.125 Keep Alive (18h): Supported 00:14:44.125 I/O Commands 00:14:44.125 ------------ 00:14:44.125 Flush (00h): Supported LBA-Change 00:14:44.125 Write (01h): Supported LBA-Change 00:14:44.125 Read (02h): Supported 00:14:44.125 Compare (05h): Supported 00:14:44.125 Write Zeroes (08h): Supported LBA-Change 00:14:44.125 Dataset Management (09h): Supported LBA-Change 00:14:44.125 Copy (19h): Supported LBA-Change 00:14:44.125 00:14:44.125 Error Log 00:14:44.125 ========= 00:14:44.125 00:14:44.125 Arbitration 00:14:44.125 =========== 00:14:44.125 Arbitration Burst: 1 00:14:44.125 00:14:44.125 Power Management 00:14:44.125 ================ 00:14:44.125 Number of Power States: 1 00:14:44.125 Current Power State: Power State #0 00:14:44.125 Power State #0: 00:14:44.125 Max Power: 0.00 W 00:14:44.125 Non-Operational State: Operational 00:14:44.125 
Entry Latency: Not Reported 00:14:44.125 Exit Latency: Not Reported 00:14:44.125 Relative Read Throughput: 0 00:14:44.125 Relative Read Latency: 0 00:14:44.125 Relative Write Throughput: 0 00:14:44.125 Relative Write Latency: 0 00:14:44.125 Idle Power: Not Reported 00:14:44.125 Active Power: Not Reported 00:14:44.125 Non-Operational Permissive Mode: Not Supported 00:14:44.125 00:14:44.125 Health Information 00:14:44.125 ================== 00:14:44.125 Critical Warnings: 00:14:44.125 Available Spare Space: OK 00:14:44.125 Temperature: OK 00:14:44.125 Device Reliability: OK 00:14:44.125 Read Only: No 00:14:44.125 Volatile Memory Backup: OK 00:14:44.125 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:44.125 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:44.125 Available Spare: 0% 00:14:44.125 Available Spare Threshold: 0% 00:14:44.125 Life Percentage Used:[2024-07-15 07:18:52.941124] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.125 [2024-07-15 07:18:52.941136] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd002c0) 00:14:44.125 [2024-07-15 07:18:52.941146] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.125 [2024-07-15 07:18:52.941176] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd423c0, cid 7, qid 0 00:14:44.125 [2024-07-15 07:18:52.941243] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.125 [2024-07-15 07:18:52.941251] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.125 [2024-07-15 07:18:52.941256] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.125 [2024-07-15 07:18:52.941260] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd423c0) on tqpair=0xd002c0 00:14:44.125 [2024-07-15 07:18:52.941315] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:14:44.125 [2024-07-15 07:18:52.941338] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41940) on tqpair=0xd002c0 00:14:44.125 [2024-07-15 07:18:52.941347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.125 [2024-07-15 07:18:52.941354] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41ac0) on tqpair=0xd002c0 00:14:44.125 [2024-07-15 07:18:52.941359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.125 [2024-07-15 07:18:52.941365] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41c40) on tqpair=0xd002c0 00:14:44.125 [2024-07-15 07:18:52.941370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.125 [2024-07-15 07:18:52.941376] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.125 [2024-07-15 07:18:52.941381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.125 [2024-07-15 07:18:52.941391] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.125 [2024-07-15 07:18:52.941396] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.125 [2024-07-15 07:18:52.941400] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.125 [2024-07-15 07:18:52.941409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.125 [2024-07-15 07:18:52.941434] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.125 [2024-07-15 07:18:52.941486] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.125 [2024-07-15 07:18:52.941494] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.125 [2024-07-15 07:18:52.941498] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.125 [2024-07-15 07:18:52.941503] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.125 [2024-07-15 07:18:52.941511] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.125 [2024-07-15 07:18:52.941516] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.125 [2024-07-15 07:18:52.941520] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.125 [2024-07-15 07:18:52.941528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.125 [2024-07-15 07:18:52.941550] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.125 [2024-07-15 07:18:52.941620] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.125 [2024-07-15 07:18:52.941637] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.125 [2024-07-15 07:18:52.941642] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.125 [2024-07-15 07:18:52.941647] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.125 [2024-07-15 07:18:52.941653] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:14:44.125 [2024-07-15 07:18:52.941658] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:14:44.125 [2024-07-15 07:18:52.941670] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.125 [2024-07-15 07:18:52.941675] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.125 [2024-07-15 07:18:52.941679] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.125 [2024-07-15 07:18:52.941688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.125 [2024-07-15 07:18:52.941709] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.125 [2024-07-15 07:18:52.941768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.125 [2024-07-15 07:18:52.941775] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.125 [2024-07-15 07:18:52.941780] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.125 [2024-07-15 07:18:52.941784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.125 [2024-07-15 07:18:52.941796] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.125 [2024-07-15 07:18:52.941802] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.125 [2024-07-15 07:18:52.941806] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.125 [2024-07-15 07:18:52.941814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.125 [2024-07-15 07:18:52.941832] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.125 [2024-07-15 07:18:52.941908] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.125 [2024-07-15 07:18:52.941922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.125 [2024-07-15 07:18:52.941931] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.125 [2024-07-15 07:18:52.941936] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.125 [2024-07-15 07:18:52.941948] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.125 [2024-07-15 07:18:52.941954] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.941958] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.126 [2024-07-15 07:18:52.941966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.126 [2024-07-15 07:18:52.941986] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.126 [2024-07-15 07:18:52.942042] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.126 [2024-07-15 07:18:52.942050] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.126 [2024-07-15 07:18:52.942054] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.942058] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.126 [2024-07-15 07:18:52.942069] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.942088] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.942092] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.126 [2024-07-15 07:18:52.942101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.126 [2024-07-15 07:18:52.942121] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.126 [2024-07-15 07:18:52.942193] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.126 [2024-07-15 07:18:52.942200] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.126 [2024-07-15 07:18:52.942205] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.942209] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.126 [2024-07-15 07:18:52.942220] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.942225] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.942229] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.126 [2024-07-15 
07:18:52.942238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.126 [2024-07-15 07:18:52.942255] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.126 [2024-07-15 07:18:52.942305] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.126 [2024-07-15 07:18:52.942312] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.126 [2024-07-15 07:18:52.942316] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.942321] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.126 [2024-07-15 07:18:52.942332] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.942337] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.942347] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.126 [2024-07-15 07:18:52.942355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.126 [2024-07-15 07:18:52.942372] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.126 [2024-07-15 07:18:52.942416] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.126 [2024-07-15 07:18:52.942424] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.126 [2024-07-15 07:18:52.942428] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.942432] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.126 [2024-07-15 07:18:52.942443] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.942448] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.942455] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.126 [2024-07-15 07:18:52.942466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.126 [2024-07-15 07:18:52.942485] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.126 [2024-07-15 07:18:52.942535] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.126 [2024-07-15 07:18:52.942542] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.126 [2024-07-15 07:18:52.942547] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.942551] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.126 [2024-07-15 07:18:52.942562] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.942567] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.942571] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.126 [2024-07-15 07:18:52.942579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.126 [2024-07-15 07:18:52.942597] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.126 [2024-07-15 07:18:52.942646] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.126 [2024-07-15 07:18:52.942653] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.126 [2024-07-15 07:18:52.942657] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.942662] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.126 [2024-07-15 07:18:52.942674] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.942681] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.942687] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.126 [2024-07-15 07:18:52.942699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.126 [2024-07-15 07:18:52.942729] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.126 [2024-07-15 07:18:52.942779] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.126 [2024-07-15 07:18:52.942792] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.126 [2024-07-15 07:18:52.942797] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.942802] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.126 [2024-07-15 07:18:52.942814] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.942819] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.942823] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.126 [2024-07-15 07:18:52.942832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.126 [2024-07-15 07:18:52.942861] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.126 [2024-07-15 07:18:52.942907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.126 [2024-07-15 07:18:52.942915] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.126 [2024-07-15 07:18:52.942920] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.942924] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.126 [2024-07-15 07:18:52.942936] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.942941] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.942945] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.126 [2024-07-15 07:18:52.942953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.126 [2024-07-15 07:18:52.942973] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.126 [2024-07-15 07:18:52.943041] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.126 [2024-07-15 
07:18:52.943064] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.126 [2024-07-15 07:18:52.943086] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.943092] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.126 [2024-07-15 07:18:52.943106] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.943111] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.943115] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.126 [2024-07-15 07:18:52.943124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.126 [2024-07-15 07:18:52.943152] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.126 [2024-07-15 07:18:52.943209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.126 [2024-07-15 07:18:52.943218] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.126 [2024-07-15 07:18:52.943222] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.943227] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.126 [2024-07-15 07:18:52.943238] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.943243] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.943248] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.126 [2024-07-15 07:18:52.943256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.126 [2024-07-15 07:18:52.943276] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.126 [2024-07-15 07:18:52.943322] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.126 [2024-07-15 07:18:52.943332] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.126 [2024-07-15 07:18:52.943337] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.943341] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.126 [2024-07-15 07:18:52.943353] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.943358] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.943362] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.126 [2024-07-15 07:18:52.943370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.126 [2024-07-15 07:18:52.943388] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.126 [2024-07-15 07:18:52.943432] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.126 [2024-07-15 07:18:52.943439] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.126 [2024-07-15 07:18:52.943443] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.126 [2024-07-15 
07:18:52.943448] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.126 [2024-07-15 07:18:52.943459] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.943464] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.943468] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.126 [2024-07-15 07:18:52.943476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.126 [2024-07-15 07:18:52.943493] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.126 [2024-07-15 07:18:52.943565] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.126 [2024-07-15 07:18:52.943576] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.126 [2024-07-15 07:18:52.943581] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.126 [2024-07-15 07:18:52.943585] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.127 [2024-07-15 07:18:52.943597] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.943602] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.943606] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.127 [2024-07-15 07:18:52.943614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.127 [2024-07-15 07:18:52.943632] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.127 [2024-07-15 07:18:52.943682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.127 [2024-07-15 07:18:52.943689] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.127 [2024-07-15 07:18:52.943693] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.943698] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.127 [2024-07-15 07:18:52.943709] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.943713] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.943718] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.127 [2024-07-15 07:18:52.943725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.127 [2024-07-15 07:18:52.943743] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.127 [2024-07-15 07:18:52.943790] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.127 [2024-07-15 07:18:52.943801] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.127 [2024-07-15 07:18:52.943806] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.943811] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.127 [2024-07-15 07:18:52.943822] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:14:44.127 [2024-07-15 07:18:52.943827] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.943831] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.127 [2024-07-15 07:18:52.943839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.127 [2024-07-15 07:18:52.943857] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.127 [2024-07-15 07:18:52.943903] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.127 [2024-07-15 07:18:52.943911] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.127 [2024-07-15 07:18:52.943915] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.943919] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.127 [2024-07-15 07:18:52.943933] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.943943] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.943950] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.127 [2024-07-15 07:18:52.943959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.127 [2024-07-15 07:18:52.943978] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.127 [2024-07-15 07:18:52.944037] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.127 [2024-07-15 07:18:52.944045] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.127 [2024-07-15 07:18:52.944049] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.944054] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.127 [2024-07-15 07:18:52.944065] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.944070] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.944098] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.127 [2024-07-15 07:18:52.944107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.127 [2024-07-15 07:18:52.944128] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.127 [2024-07-15 07:18:52.944179] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.127 [2024-07-15 07:18:52.944186] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.127 [2024-07-15 07:18:52.944190] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.944195] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.127 [2024-07-15 07:18:52.944206] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.944212] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.944216] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0xd002c0) 00:14:44.127 [2024-07-15 07:18:52.944224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.127 [2024-07-15 07:18:52.944242] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.127 [2024-07-15 07:18:52.944285] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.127 [2024-07-15 07:18:52.944296] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.127 [2024-07-15 07:18:52.944301] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.944306] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.127 [2024-07-15 07:18:52.944317] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.944322] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.944326] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.127 [2024-07-15 07:18:52.944334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.127 [2024-07-15 07:18:52.944353] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.127 [2024-07-15 07:18:52.944402] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.127 [2024-07-15 07:18:52.944410] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.127 [2024-07-15 07:18:52.944414] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.944419] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.127 [2024-07-15 07:18:52.944430] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.944435] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.944439] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.127 [2024-07-15 07:18:52.944448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.127 [2024-07-15 07:18:52.944467] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.127 [2024-07-15 07:18:52.944511] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.127 [2024-07-15 07:18:52.944523] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.127 [2024-07-15 07:18:52.944527] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.944532] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.127 [2024-07-15 07:18:52.944543] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.944548] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.944552] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.127 [2024-07-15 07:18:52.944561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.127 
[2024-07-15 07:18:52.944579] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.127 [2024-07-15 07:18:52.944636] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.127 [2024-07-15 07:18:52.944643] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.127 [2024-07-15 07:18:52.944648] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.944652] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.127 [2024-07-15 07:18:52.944663] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.944668] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.944672] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.127 [2024-07-15 07:18:52.944681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.127 [2024-07-15 07:18:52.944698] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.127 [2024-07-15 07:18:52.944749] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.127 [2024-07-15 07:18:52.944761] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.127 [2024-07-15 07:18:52.944766] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.944770] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.127 [2024-07-15 07:18:52.944782] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.944787] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.944791] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.127 [2024-07-15 07:18:52.944804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.127 [2024-07-15 07:18:52.944828] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.127 [2024-07-15 07:18:52.944872] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.127 [2024-07-15 07:18:52.944882] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.127 [2024-07-15 07:18:52.944886] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.944891] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.127 [2024-07-15 07:18:52.944902] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.944907] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.944912] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.127 [2024-07-15 07:18:52.944920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.127 [2024-07-15 07:18:52.944938] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.127 [2024-07-15 07:18:52.944997] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:14:44.127 [2024-07-15 07:18:52.945008] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.127 [2024-07-15 07:18:52.945013] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.945018] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.127 [2024-07-15 07:18:52.945029] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.945034] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.127 [2024-07-15 07:18:52.945038] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.127 [2024-07-15 07:18:52.945046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.127 [2024-07-15 07:18:52.945064] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.127 [2024-07-15 07:18:52.949095] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.127 [2024-07-15 07:18:52.949116] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.127 [2024-07-15 07:18:52.949122] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.128 [2024-07-15 07:18:52.949127] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.128 [2024-07-15 07:18:52.949142] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:44.128 [2024-07-15 07:18:52.949148] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:44.128 [2024-07-15 07:18:52.949152] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd002c0) 00:14:44.128 [2024-07-15 07:18:52.949162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.128 [2024-07-15 07:18:52.949189] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd41dc0, cid 3, qid 0 00:14:44.128 [2024-07-15 07:18:52.949248] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:44.128 [2024-07-15 07:18:52.949256] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:44.128 [2024-07-15 07:18:52.949260] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:44.128 [2024-07-15 07:18:52.949264] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd41dc0) on tqpair=0xd002c0 00:14:44.128 [2024-07-15 07:18:52.949273] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:14:44.128 0% 00:14:44.128 Data Units Read: 0 00:14:44.128 Data Units Written: 0 00:14:44.128 Host Read Commands: 0 00:14:44.128 Host Write Commands: 0 00:14:44.128 Controller Busy Time: 0 minutes 00:14:44.128 Power Cycles: 0 00:14:44.128 Power On Hours: 0 hours 00:14:44.128 Unsafe Shutdowns: 0 00:14:44.128 Unrecoverable Media Errors: 0 00:14:44.128 Lifetime Error Log Entries: 0 00:14:44.128 Warning Temperature Time: 0 minutes 00:14:44.128 Critical Temperature Time: 0 minutes 00:14:44.128 00:14:44.128 Number of Queues 00:14:44.128 ================ 00:14:44.128 Number of I/O Submission Queues: 127 00:14:44.128 Number of I/O Completion Queues: 127 00:14:44.128 00:14:44.128 Active Namespaces 00:14:44.128 ================= 00:14:44.128 Namespace ID:1 00:14:44.128 Error Recovery Timeout: Unlimited 
00:14:44.128 Command Set Identifier: NVM (00h) 00:14:44.128 Deallocate: Supported 00:14:44.128 Deallocated/Unwritten Error: Not Supported 00:14:44.128 Deallocated Read Value: Unknown 00:14:44.128 Deallocate in Write Zeroes: Not Supported 00:14:44.128 Deallocated Guard Field: 0xFFFF 00:14:44.128 Flush: Supported 00:14:44.128 Reservation: Supported 00:14:44.128 Namespace Sharing Capabilities: Multiple Controllers 00:14:44.128 Size (in LBAs): 131072 (0GiB) 00:14:44.128 Capacity (in LBAs): 131072 (0GiB) 00:14:44.128 Utilization (in LBAs): 131072 (0GiB) 00:14:44.128 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:44.128 EUI64: ABCDEF0123456789 00:14:44.128 UUID: e2bd08a5-7985-48ea-b116-67e6ac269337 00:14:44.128 Thin Provisioning: Not Supported 00:14:44.128 Per-NS Atomic Units: Yes 00:14:44.128 Atomic Boundary Size (Normal): 0 00:14:44.128 Atomic Boundary Size (PFail): 0 00:14:44.128 Atomic Boundary Offset: 0 00:14:44.128 Maximum Single Source Range Length: 65535 00:14:44.128 Maximum Copy Length: 65535 00:14:44.128 Maximum Source Range Count: 1 00:14:44.128 NGUID/EUI64 Never Reused: No 00:14:44.128 Namespace Write Protected: No 00:14:44.128 Number of LBA Formats: 1 00:14:44.128 Current LBA Format: LBA Format #00 00:14:44.128 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:44.128 00:14:44.128 07:18:52 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:14:44.128 07:18:53 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:44.128 07:18:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.128 07:18:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:44.128 07:18:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.128 07:18:53 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:44.128 07:18:53 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:14:44.128 07:18:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:44.128 07:18:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:14:44.128 07:18:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:44.128 07:18:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:14:44.128 07:18:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:44.128 07:18:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:44.128 rmmod nvme_tcp 00:14:44.128 rmmod nvme_fabrics 00:14:44.128 rmmod nvme_keyring 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 74639 ']' 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 74639 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 74639 ']' 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 74639 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74639 00:14:44.387 killing 
process with pid 74639 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74639' 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 74639 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 74639 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:44.387 ************************************ 00:14:44.387 END TEST nvmf_identify 00:14:44.387 ************************************ 00:14:44.387 00:14:44.387 real 0m2.348s 00:14:44.387 user 0m6.625s 00:14:44.387 sys 0m0.569s 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:44.387 07:18:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:44.675 07:18:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:44.675 07:18:53 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:44.675 07:18:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:44.675 07:18:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:44.675 07:18:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:44.675 ************************************ 00:14:44.675 START TEST nvmf_perf 00:14:44.675 ************************************ 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:44.675 * Looking for test storage... 
00:14:44.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:44.675 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:44.676 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:44.676 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:44.676 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:44.676 Cannot find device "nvmf_tgt_br" 00:14:44.676 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:14:44.676 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:44.676 Cannot find device "nvmf_tgt_br2" 00:14:44.676 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:14:44.676 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:44.676 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:44.676 Cannot find device "nvmf_tgt_br" 00:14:44.676 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:14:44.676 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:44.676 Cannot find device "nvmf_tgt_br2" 00:14:44.676 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:14:44.676 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:44.676 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:44.676 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:44.676 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:44.676 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:14:44.676 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:44.676 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:44.676 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:14:44.676 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:44.676 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:44.676 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:44.676 
07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:44.676 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:44.676 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:44.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:44.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:14:44.934 00:14:44.934 --- 10.0.0.2 ping statistics --- 00:14:44.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.934 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:44.934 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:44.934 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:14:44.934 00:14:44.934 --- 10.0.0.3 ping statistics --- 00:14:44.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.934 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:44.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:44.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:14:44.934 00:14:44.934 --- 10.0.0.1 ping statistics --- 00:14:44.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.934 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:44.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=74850 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 74850 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 74850 ']' 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:44.934 07:18:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:44.934 [2024-07-15 07:18:53.858500] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:14:44.934 [2024-07-15 07:18:53.858753] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.192 [2024-07-15 07:18:53.996481] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:45.192 [2024-07-15 07:18:54.066648] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.192 [2024-07-15 07:18:54.066881] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
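The nvmf_veth_init sequence traced above builds a small virtual network for the TCP target: a network namespace (nvmf_tgt_ns_spdk) holds the two target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator end stays in the root namespace as nvmf_init_if (10.0.0.1), and the three bridge-side peers are enslaved to nvmf_br, with iptables opening TCP port 4420. Condensed into a standalone sketch (same interface names and addresses as in the log, xtrace noise stripped), the setup is roughly:

    #!/usr/bin/env bash
    # Minimal sketch of the nvmf_veth_init topology used by these tests (run as root).
    set -e

    ip netns add nvmf_tgt_ns_spdk

    # One veth pair per endpoint; the *_br ends stay in the root namespace.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Move the target-side ends into the namespace and assign addresses.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Tie the bridge-side ends together and open the NVMe/TCP port.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity checks, as in the log: initiator -> targets, and namespace -> initiator.
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The target application is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt, as traced above), so the 10.0.0.2:4420 listener created later is only reachable through this bridge.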
00:14:45.192 [2024-07-15 07:18:54.067048] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.192 [2024-07-15 07:18:54.067307] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.192 [2024-07-15 07:18:54.067430] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:45.192 [2024-07-15 07:18:54.067589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.192 [2024-07-15 07:18:54.067653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.192 [2024-07-15 07:18:54.068213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:45.192 [2024-07-15 07:18:54.068225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.192 [2024-07-15 07:18:54.100957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:45.450 07:18:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:45.450 07:18:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:14:45.450 07:18:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:45.450 07:18:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:45.450 07:18:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:45.450 07:18:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.450 07:18:54 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:45.450 07:18:54 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:45.708 07:18:54 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:45.708 07:18:54 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:45.966 07:18:54 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:14:45.966 07:18:54 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:46.223 07:18:55 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:46.224 07:18:55 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:14:46.224 07:18:55 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:46.224 07:18:55 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:46.224 07:18:55 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:46.789 [2024-07-15 07:18:55.442025] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.789 07:18:55 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:47.047 07:18:55 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:47.047 07:18:55 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:47.305 07:18:56 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:47.305 07:18:56 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Nvme0n1 00:14:47.563 07:18:56 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:47.821 [2024-07-15 07:18:56.619373] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.821 07:18:56 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:48.079 07:18:56 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:14:48.079 07:18:56 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:48.079 07:18:56 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:48.079 07:18:56 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:49.496 Initializing NVMe Controllers 00:14:49.496 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:49.496 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:49.496 Initialization complete. Launching workers. 00:14:49.496 ======================================================== 00:14:49.496 Latency(us) 00:14:49.496 Device Information : IOPS MiB/s Average min max 00:14:49.496 PCIE (0000:00:10.0) NSID 1 from core 0: 24158.27 94.37 1324.49 322.90 8179.18 00:14:49.496 ======================================================== 00:14:49.496 Total : 24158.27 94.37 1324.49 322.90 8179.18 00:14:49.496 00:14:49.496 07:18:58 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:50.427 Initializing NVMe Controllers 00:14:50.427 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:50.427 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:50.427 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:50.427 Initialization complete. Launching workers. 00:14:50.427 ======================================================== 00:14:50.427 Latency(us) 00:14:50.427 Device Information : IOPS MiB/s Average min max 00:14:50.427 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3360.38 13.13 297.26 106.92 4265.10 00:14:50.427 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.50 0.48 8160.47 4971.47 12001.99 00:14:50.427 ======================================================== 00:14:50.427 Total : 3483.88 13.61 576.00 106.92 12001.99 00:14:50.427 00:14:50.685 07:18:59 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:52.057 Initializing NVMe Controllers 00:14:52.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:52.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:52.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:52.057 Initialization complete. Launching workers. 
00:14:52.057 ======================================================== 00:14:52.057 Latency(us) 00:14:52.058 Device Information : IOPS MiB/s Average min max 00:14:52.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7918.47 30.93 4042.20 590.32 9669.56 00:14:52.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3909.76 15.27 8217.81 6259.64 21714.85 00:14:52.058 ======================================================== 00:14:52.058 Total : 11828.23 46.20 5422.43 590.32 21714.85 00:14:52.058 00:14:52.058 07:19:00 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:52.058 07:19:00 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:54.584 Initializing NVMe Controllers 00:14:54.584 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:54.584 Controller IO queue size 128, less than required. 00:14:54.584 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:54.584 Controller IO queue size 128, less than required. 00:14:54.584 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:54.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:54.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:54.584 Initialization complete. Launching workers. 00:14:54.584 ======================================================== 00:14:54.584 Latency(us) 00:14:54.584 Device Information : IOPS MiB/s Average min max 00:14:54.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1773.48 443.37 73227.70 46995.31 109345.66 00:14:54.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 662.62 165.65 199195.26 76986.60 303748.09 00:14:54.584 ======================================================== 00:14:54.584 Total : 2436.10 609.02 107490.88 46995.31 303748.09 00:14:54.584 00:14:54.584 07:19:03 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:14:54.584 Initializing NVMe Controllers 00:14:54.584 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:54.584 Controller IO queue size 128, less than required. 00:14:54.584 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:54.584 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:54.584 Controller IO queue size 128, less than required. 00:14:54.584 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:54.584 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:14:54.584 WARNING: Some requested NVMe devices were skipped 00:14:54.584 No valid NVMe controllers or AIO or URING devices found 00:14:54.842 07:19:03 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:14:57.386 Initializing NVMe Controllers 00:14:57.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:57.386 Controller IO queue size 128, less than required. 00:14:57.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:57.386 Controller IO queue size 128, less than required. 00:14:57.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:57.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:57.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:57.386 Initialization complete. Launching workers. 00:14:57.386 00:14:57.386 ==================== 00:14:57.386 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:57.386 TCP transport: 00:14:57.386 polls: 9580 00:14:57.386 idle_polls: 5584 00:14:57.386 sock_completions: 3996 00:14:57.386 nvme_completions: 6359 00:14:57.386 submitted_requests: 9386 00:14:57.386 queued_requests: 1 00:14:57.386 00:14:57.386 ==================== 00:14:57.386 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:57.386 TCP transport: 00:14:57.386 polls: 11957 00:14:57.386 idle_polls: 7717 00:14:57.386 sock_completions: 4240 00:14:57.386 nvme_completions: 6407 00:14:57.386 submitted_requests: 9556 00:14:57.386 queued_requests: 1 00:14:57.386 ======================================================== 00:14:57.386 Latency(us) 00:14:57.386 Device Information : IOPS MiB/s Average min max 00:14:57.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1586.28 396.57 81977.12 44110.88 133183.00 00:14:57.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1598.26 399.56 81229.29 28386.86 134005.11 00:14:57.386 ======================================================== 00:14:57.386 Total : 3184.54 796.13 81601.80 28386.86 134005.11 00:14:57.386 00:14:57.386 07:19:06 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:14:57.386 07:19:06 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:57.645 07:19:06 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:14:57.645 07:19:06 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:57.645 07:19:06 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:14:57.645 07:19:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:57.645 07:19:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:14:57.645 07:19:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:57.645 07:19:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:14:57.645 07:19:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:57.645 07:19:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:57.645 rmmod nvme_tcp 00:14:57.645 rmmod nvme_fabrics 00:14:57.645 rmmod nvme_keyring 00:14:57.645 07:19:06 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:57.645 07:19:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:14:57.645 07:19:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:14:57.645 07:19:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 74850 ']' 00:14:57.645 07:19:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 74850 00:14:57.645 07:19:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 74850 ']' 00:14:57.645 07:19:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 74850 00:14:57.645 07:19:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:14:57.645 07:19:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:57.645 07:19:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74850 00:14:57.645 killing process with pid 74850 00:14:57.645 07:19:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:57.645 07:19:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:57.645 07:19:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74850' 00:14:57.645 07:19:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 74850 00:14:57.645 07:19:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 74850 00:14:58.211 07:19:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:58.211 07:19:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:58.211 07:19:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:58.211 07:19:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:58.211 07:19:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:58.211 07:19:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.211 07:19:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.211 07:19:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.211 07:19:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:58.211 ************************************ 00:14:58.211 END TEST nvmf_perf 00:14:58.211 ************************************ 00:14:58.211 00:14:58.212 real 0m13.758s 00:14:58.212 user 0m51.031s 00:14:58.212 sys 0m3.852s 00:14:58.212 07:19:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:58.212 07:19:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:58.212 07:19:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:58.212 07:19:07 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:58.212 07:19:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:58.212 07:19:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:58.212 07:19:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:58.470 ************************************ 00:14:58.470 START TEST nvmf_fio_host 00:14:58.470 ************************************ 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:58.470 * Looking for test storage... 
00:14:58.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.470 07:19:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
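The common.sh block sourced for this test also generates a host NQN/ID pair (NVME_HOSTNQN via nvme gen-hostnqn, plus NVME_HOSTID) and defines NVME_CONNECT='nvme connect' and NVME_HOST for tests that attach the kernel initiator to the target. No such connect is traced in this part of the log; purely as an illustration (the subsystem NQN and the 10.0.0.2:4420 listener are the ones used throughout these tests, everything else is an assumption), the resulting call would look roughly like:

    # Hypothetical illustration only; this exact invocation does not appear in this log.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" \
        --hostid="$NVME_HOSTID"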
00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:58.471 Cannot find device "nvmf_tgt_br" 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:58.471 Cannot find device "nvmf_tgt_br2" 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:58.471 Cannot find device "nvmf_tgt_br" 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:58.471 Cannot find device "nvmf_tgt_br2" 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:58.471 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:58.471 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:14:58.471 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:58.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:58.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:14:58.730 00:14:58.730 --- 10.0.0.2 ping statistics --- 00:14:58.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.730 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:58.730 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:58.730 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:14:58.730 00:14:58.730 --- 10.0.0.3 ping statistics --- 00:14:58.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.730 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:58.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:58.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:14:58.730 00:14:58.730 --- 10.0.0.1 ping statistics --- 00:14:58.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.730 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75255 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75255 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 75255 ']' 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:58.730 07:19:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.731 07:19:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:58.731 07:19:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:58.731 [2024-07-15 07:19:07.678714] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:14:58.731 [2024-07-15 07:19:07.679414] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.989 [2024-07-15 07:19:07.822764] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:58.989 [2024-07-15 07:19:07.894251] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:58.989 [2024-07-15 07:19:07.894679] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:58.989 [2024-07-15 07:19:07.894971] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:58.989 [2024-07-15 07:19:07.895265] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:58.989 [2024-07-15 07:19:07.895482] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:58.989 [2024-07-15 07:19:07.895829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.989 [2024-07-15 07:19:07.895881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.989 [2024-07-15 07:19:07.896612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:58.989 [2024-07-15 07:19:07.896704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.989 [2024-07-15 07:19:07.933354] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:59.949 07:19:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.949 07:19:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:14:59.949 07:19:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:00.206 [2024-07-15 07:19:08.971063] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.206 07:19:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:00.206 07:19:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:00.206 07:19:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:00.206 07:19:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:00.463 Malloc1 00:15:00.463 07:19:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:00.720 07:19:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:00.977 07:19:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:01.235 [2024-07-15 07:19:10.011028] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.235 07:19:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:01.492 07:19:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:01.492 07:19:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:01.492 07:19:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:01.492 07:19:10 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:01.492 07:19:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:01.492 07:19:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:01.492 07:19:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:01.492 07:19:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:01.492 07:19:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:01.493 07:19:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:01.493 07:19:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:01.493 07:19:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:01.493 07:19:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:01.493 07:19:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:01.493 07:19:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:01.493 07:19:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:01.493 07:19:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:01.493 07:19:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:01.493 07:19:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:01.493 07:19:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:01.493 07:19:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:01.493 07:19:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:01.493 07:19:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:01.751 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:01.751 fio-3.35 00:15:01.751 Starting 1 thread 00:15:04.280 00:15:04.280 test: (groupid=0, jobs=1): err= 0: pid=75338: Mon Jul 15 07:19:12 2024 00:15:04.280 read: IOPS=8794, BW=34.4MiB/s (36.0MB/s)(68.9MiB/2007msec) 00:15:04.280 slat (usec): min=2, max=364, avg= 2.57, stdev= 3.44 00:15:04.280 clat (usec): min=2634, max=13432, avg=7561.98, stdev=513.41 00:15:04.280 lat (usec): min=2692, max=13435, avg=7564.55, stdev=513.02 00:15:04.280 clat percentiles (usec): 00:15:04.281 | 1.00th=[ 6521], 5.00th=[ 6849], 10.00th=[ 6980], 20.00th=[ 7177], 00:15:04.281 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7701], 00:15:04.281 | 70.00th=[ 7767], 80.00th=[ 7898], 90.00th=[ 8094], 95.00th=[ 8291], 00:15:04.281 | 99.00th=[ 8717], 99.50th=[ 8979], 99.90th=[11731], 99.95th=[12649], 00:15:04.281 | 99.99th=[13435] 00:15:04.281 bw ( KiB/s): min=34272, max=35760, per=99.98%, avg=35172.00, stdev=648.30, samples=4 00:15:04.281 iops : min= 8568, max= 8940, avg=8793.00, stdev=162.07, samples=4 00:15:04.281 write: IOPS=8802, BW=34.4MiB/s (36.1MB/s)(69.0MiB/2007msec); 0 zone resets 00:15:04.281 slat (usec): 
min=2, max=240, avg= 2.73, stdev= 2.06 00:15:04.281 clat (usec): min=2433, max=13277, avg=6912.67, stdev=479.92 00:15:04.281 lat (usec): min=2447, max=13279, avg=6915.40, stdev=479.69 00:15:04.281 clat percentiles (usec): 00:15:04.281 | 1.00th=[ 5932], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 6587], 00:15:04.281 | 30.00th=[ 6718], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 6980], 00:15:04.281 | 70.00th=[ 7111], 80.00th=[ 7242], 90.00th=[ 7373], 95.00th=[ 7570], 00:15:04.281 | 99.00th=[ 7898], 99.50th=[ 8160], 99.90th=[11076], 99.95th=[12780], 00:15:04.281 | 99.99th=[13304] 00:15:04.281 bw ( KiB/s): min=35000, max=35424, per=100.00%, avg=35210.00, stdev=229.21, samples=4 00:15:04.281 iops : min= 8750, max= 8856, avg=8802.50, stdev=57.30, samples=4 00:15:04.281 lat (msec) : 4=0.16%, 10=99.65%, 20=0.19% 00:15:04.281 cpu : usr=67.50%, sys=23.98%, ctx=19, majf=0, minf=7 00:15:04.281 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:04.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:04.281 issued rwts: total=17651,17667,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.281 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:04.281 00:15:04.281 Run status group 0 (all jobs): 00:15:04.281 READ: bw=34.4MiB/s (36.0MB/s), 34.4MiB/s-34.4MiB/s (36.0MB/s-36.0MB/s), io=68.9MiB (72.3MB), run=2007-2007msec 00:15:04.281 WRITE: bw=34.4MiB/s (36.1MB/s), 34.4MiB/s-34.4MiB/s (36.1MB/s-36.1MB/s), io=69.0MiB (72.4MB), run=2007-2007msec 00:15:04.281 07:19:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:04.281 07:19:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:04.281 07:19:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:04.281 07:19:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:04.281 07:19:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:04.281 07:19:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:04.281 07:19:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:04.281 07:19:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:04.281 07:19:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:04.281 07:19:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:04.281 07:19:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:04.281 07:19:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:04.281 07:19:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:04.281 07:19:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:04.281 07:19:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:04.281 07:19:12 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:04.281 07:19:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:04.281 07:19:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:04.281 07:19:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:04.281 07:19:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:04.281 07:19:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:04.281 07:19:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:04.281 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:04.281 fio-3.35 00:15:04.281 Starting 1 thread 00:15:06.812 00:15:06.812 test: (groupid=0, jobs=1): err= 0: pid=75387: Mon Jul 15 07:19:15 2024 00:15:06.812 read: IOPS=7916, BW=124MiB/s (130MB/s)(248MiB/2008msec) 00:15:06.812 slat (usec): min=3, max=134, avg= 4.08, stdev= 1.90 00:15:06.812 clat (usec): min=1929, max=19257, avg=8847.94, stdev=2776.66 00:15:06.812 lat (usec): min=1933, max=19262, avg=8852.02, stdev=2776.79 00:15:06.812 clat percentiles (usec): 00:15:06.812 | 1.00th=[ 4293], 5.00th=[ 5145], 10.00th=[ 5604], 20.00th=[ 6390], 00:15:06.812 | 30.00th=[ 7111], 40.00th=[ 7767], 50.00th=[ 8455], 60.00th=[ 9110], 00:15:06.812 | 70.00th=[10028], 80.00th=[11076], 90.00th=[12649], 95.00th=[14353], 00:15:06.812 | 99.00th=[16581], 99.50th=[17433], 99.90th=[18744], 99.95th=[19006], 00:15:06.812 | 99.99th=[19268] 00:15:06.812 bw ( KiB/s): min=60736, max=76064, per=52.17%, avg=66080.00, stdev=7124.49, samples=4 00:15:06.812 iops : min= 3796, max= 4754, avg=4130.00, stdev=445.28, samples=4 00:15:06.812 write: IOPS=4662, BW=72.8MiB/s (76.4MB/s)(135MiB/1854msec); 0 zone resets 00:15:06.812 slat (usec): min=36, max=223, avg=41.30, stdev= 6.74 00:15:06.813 clat (usec): min=1620, max=23677, avg=12800.29, stdev=2513.81 00:15:06.813 lat (usec): min=1659, max=23723, avg=12841.59, stdev=2515.78 00:15:06.813 clat percentiles (usec): 00:15:06.813 | 1.00th=[ 8160], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10683], 00:15:06.813 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12518], 60.00th=[12911], 00:15:06.813 | 70.00th=[13698], 80.00th=[14877], 90.00th=[16188], 95.00th=[17433], 00:15:06.813 | 99.00th=[19792], 99.50th=[20317], 99.90th=[23200], 99.95th=[23462], 00:15:06.813 | 99.99th=[23725] 00:15:06.813 bw ( KiB/s): min=61888, max=78304, per=91.84%, avg=68512.00, stdev=7321.64, samples=4 00:15:06.813 iops : min= 3868, max= 4894, avg=4282.00, stdev=457.60, samples=4 00:15:06.813 lat (msec) : 2=0.02%, 4=0.33%, 10=48.47%, 20=50.95%, 50=0.24% 00:15:06.813 cpu : usr=79.03%, sys=15.69%, ctx=5, majf=0, minf=12 00:15:06.813 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:15:06.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:06.813 issued rwts: total=15896,8644,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.813 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:06.813 00:15:06.813 Run status group 0 (all jobs): 00:15:06.813 READ: bw=124MiB/s 
(130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=248MiB (260MB), run=2008-2008msec 00:15:06.813 WRITE: bw=72.8MiB/s (76.4MB/s), 72.8MiB/s-72.8MiB/s (76.4MB/s-76.4MB/s), io=135MiB (142MB), run=1854-1854msec 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:06.813 rmmod nvme_tcp 00:15:06.813 rmmod nvme_fabrics 00:15:06.813 rmmod nvme_keyring 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 75255 ']' 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 75255 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 75255 ']' 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 75255 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75255 00:15:06.813 killing process with pid 75255 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75255' 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 75255 00:15:06.813 07:19:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 75255 00:15:07.071 07:19:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:07.071 07:19:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:07.071 07:19:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:07.071 07:19:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:07.071 07:19:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:07.071 07:19:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.071 07:19:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:07.071 07:19:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.071 07:19:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:07.071 00:15:07.071 real 0m8.693s 00:15:07.071 user 0m35.889s 00:15:07.071 sys 0m2.322s 00:15:07.071 07:19:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:07.071 07:19:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:07.071 ************************************ 00:15:07.071 END TEST nvmf_fio_host 00:15:07.071 ************************************ 00:15:07.071 07:19:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:07.071 07:19:15 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:07.071 07:19:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:07.071 07:19:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:07.071 07:19:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:07.071 ************************************ 00:15:07.071 START TEST nvmf_failover 00:15:07.071 ************************************ 00:15:07.071 07:19:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:07.071 * Looking for test storage... 00:15:07.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:07.071 07:19:15 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:07.071 07:19:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.071 07:19:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.328 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:07.329 Cannot find device "nvmf_tgt_br" 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:07.329 Cannot find device "nvmf_tgt_br2" 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:07.329 Cannot find device "nvmf_tgt_br" 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:07.329 Cannot find device "nvmf_tgt_br2" 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:07.329 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:07.329 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 
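Condensed, the nvmf_veth_init sequence traced here builds a self-contained test network: one veth pair for the initiator, two for the target (their "if" ends moved into the nvmf_tgt_ns_spdk namespace), and a bridge joining the host-side peers. A minimal sketch, using only the interface names and addresses from this trace; the link-up steps are omitted, and the bridge wiring and firewall rules follow in the next entries below:

ip netns add nvmf_tgt_ns_spdk
# initiator side stays in the root namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip addr add 10.0.0.1/24 dev nvmf_init_if
# target side: the "if" ends move into the namespace, the "br" ends stay behind
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bridge the three host-side peers so 10.0.0.1 can reach 10.0.0.2 and 10.0.0.3
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The "Cannot find device" and "Cannot open network namespace" messages just above are expected: the script tears down any leftover topology before recreating it, and on a fresh runner there is nothing to delete, so each failing cleanup command is simply tolerated.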
00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:07.329 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:07.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:07.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:15:07.587 00:15:07.587 --- 10.0.0.2 ping statistics --- 00:15:07.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.587 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:07.587 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:07.587 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:15:07.587 00:15:07.587 --- 10.0.0.3 ping statistics --- 00:15:07.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.587 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:07.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:07.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:15:07.587 00:15:07.587 --- 10.0.0.1 ping statistics --- 00:15:07.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.587 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=75603 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 75603 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75603 ']' 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:07.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:07.587 07:19:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:07.587 [2024-07-15 07:19:16.427807] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:15:07.587 [2024-07-15 07:19:16.427913] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:07.845 [2024-07-15 07:19:16.570040] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:07.845 [2024-07-15 07:19:16.640923] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:07.846 [2024-07-15 07:19:16.641265] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:07.846 [2024-07-15 07:19:16.641484] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:07.846 [2024-07-15 07:19:16.641553] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:07.846 [2024-07-15 07:19:16.641686] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
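The target application itself is launched inside the namespace, which is why NVMF_APP gets prefixed with the NVMF_TARGET_NS_CMD array earlier in the trace: its TCP listeners bind to the namespaced 10.0.0.2/10.0.0.3 addresses, while the JSON-RPC socket is a filesystem unix socket at /var/tmp/spdk.sock, so the host-side rpc.py calls can reach it without entering the namespace. A rough sketch of that launch-and-wait pattern, using the paths from this run (the waitforlisten helper comes from autotest_common.sh, as traced above):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# waitforlisten polls the RPC socket (/var/tmp/spdk.sock) until the app answers
waitforlisten "$nvmfpid"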
00:15:07.846 [2024-07-15 07:19:16.641873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:07.846 [2024-07-15 07:19:16.642532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:07.846 [2024-07-15 07:19:16.642583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:07.846 [2024-07-15 07:19:16.677616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:08.781 07:19:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:08.781 07:19:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:08.781 07:19:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:08.781 07:19:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:08.781 07:19:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:08.781 07:19:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.781 07:19:17 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:08.781 [2024-07-15 07:19:17.700499] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:08.781 07:19:17 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:09.349 Malloc0 00:15:09.349 07:19:18 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:09.607 07:19:18 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:09.866 07:19:18 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:10.124 [2024-07-15 07:19:18.864218] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.124 07:19:18 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:10.383 [2024-07-15 07:19:19.148454] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:10.383 07:19:19 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:10.642 [2024-07-15 07:19:19.384674] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:10.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
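Once the target answers RPCs, failover.sh provisions it entirely through rpc.py. Collapsed into one place (the loop and the $rpc variable are editorial shorthand; the script issues the three add_listener calls separately, as traced above), the sequence amounts to:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, options exactly as traced
$rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                        # the three ports the failover paths will cycle through
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done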
00:15:10.642 07:19:19 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75662 00:15:10.642 07:19:19 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:10.642 07:19:19 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75662 /var/tmp/bdevperf.sock 00:15:10.642 07:19:19 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:10.642 07:19:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75662 ']' 00:15:10.642 07:19:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:10.642 07:19:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:10.642 07:19:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:10.642 07:19:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:10.642 07:19:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:11.579 07:19:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:11.579 07:19:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:11.579 07:19:20 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:11.838 NVMe0n1 00:15:11.838 07:19:20 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:12.406 00:15:12.406 07:19:21 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75684 00:15:12.406 07:19:21 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:12.406 07:19:21 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:13.341 07:19:22 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:13.600 07:19:22 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:16.881 07:19:25 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:16.881 00:15:16.881 07:19:25 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:17.139 07:19:26 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:20.425 07:19:29 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:20.425 [2024-07-15 07:19:29.315446] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.425 
07:19:29 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:21.800 07:19:30 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:21.800 07:19:30 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 75684 00:15:28.365 0 00:15:28.365 07:19:36 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 75662 00:15:28.365 07:19:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75662 ']' 00:15:28.365 07:19:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75662 00:15:28.365 07:19:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:15:28.365 07:19:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:28.365 07:19:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75662 00:15:28.365 killing process with pid 75662 00:15:28.365 07:19:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:28.365 07:19:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:28.365 07:19:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75662' 00:15:28.365 07:19:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75662 00:15:28.365 07:19:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75662 00:15:28.365 07:19:36 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:28.365 [2024-07-15 07:19:19.456515] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:15:28.365 [2024-07-15 07:19:19.456632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75662 ] 00:15:28.365 [2024-07-15 07:19:19.597814] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.365 [2024-07-15 07:19:19.668179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.365 [2024-07-15 07:19:19.701493] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:28.365 Running I/O for 15 seconds... 
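The failover exercise is driven from both ends: bdevperf (started above with -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f) gets the NVMe0 controller attached over ports 4420 and 4421, and while its 15-second verify workload runs, the test removes listeners out from under it and adds new ones so the bdev_nvme layer has to fail over between paths. The long run of ABORTED - SQ DELETION notices that follows appears to be the expected side effect: each removed listener tears down its queue pairs, and the target aborts the commands still queued on them. A condensed sketch of the path shuffle, with the rpc_tgt/rpc_bperf/nqn variables being editorial shorthand for the rpc.py invocations traced above and below:

rpc_tgt=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rpc_bperf="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
nqn=nqn.2016-06.io.spdk:cnode1

# two initial paths for the NVMe0 bdev inside bdevperf
$rpc_bperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn"
$rpc_bperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$nqn"

# while the verify job runs, flip the active path on the target side
$rpc_tgt nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420   # I/O fails over to 4421
sleep 3
$rpc_bperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$nqn"
$rpc_tgt nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421   # only 4422 serves I/O now
sleep 3
$rpc_tgt nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420      # bring the first port back
sleep 1
$rpc_tgt nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422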
00:15:28.365 [2024-07-15 07:19:22.402646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:62624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:28.365 [2024-07-15 07:19:22.402724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs omitted: every remaining queued I/O on qid:1 (WRITE lba 62752-63632, READ lba 62632-62744) is aborted with SQ DELETION (00/08) ...]
00:15:28.369 [2024-07-15 07:19:22.406598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf57c0 is same with the state(5) to be set
00:15:28.369 [2024-07-15 07:19:22.406616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:15:28.369 [2024-07-15 07:19:22.406626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:15:28.369 [2024-07-15 07:19:22.406637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63640 len:8 PRP1 0x0 PRP2 0x0
00:15:28.369 [2024-07-15 07:19:22.406652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:28.369 [2024-07-15 07:19:22.406700] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cf57c0 was disconnected and freed. reset controller.
00:15:28.369 [2024-07-15 07:19:22.406718] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:15:28.369 [2024-07-15 07:19:22.406772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:15:28.369 [2024-07-15 07:19:22.406793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:28.369 [2024-07-15 07:19:22.406808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:15:28.369 [2024-07-15 07:19:22.406821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:28.369 [2024-07-15 07:19:22.406835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:15:28.369 [2024-07-15 07:19:22.406848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:28.369 [2024-07-15 07:19:22.406862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:15:28.369 [2024-07-15 07:19:22.406875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:28.369 [2024-07-15 07:19:22.406888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:15:28.369 [2024-07-15 07:19:22.406932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4570 (9): Bad file descriptor
00:15:28.369 [2024-07-15 07:19:22.410857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:15:28.369 [2024-07-15 07:19:22.450421] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
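[editorial note] The abort dumps in this log are easier to read as per-queue totals than as raw command/completion pairs. Below is a minimal Python sketch for post-processing a saved copy of this console log; it is not part of the CI job, the default file name is hypothetical, and only the NOTICE line formats visible above (nvme_io_qpair_print_command and spdk_nvme_print_completion) are assumed.

#!/usr/bin/env python3
"""Minimal sketch: condense an SQ-deletion abort dump into per-queue counts.

Assumes only the NOTICE line format seen in this console log; the default
log file name below is hypothetical.
"""
import re
import sys
from collections import defaultdict

# "... nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66120 len:8 ..."
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<op>READ|WRITE) "
    r"sqid:(?P<sqid>\d+) cid:\d+ nsid:\d+ lba:(?P<lba>\d+) len:\d+"
)
# "... spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) ..."
ABORT_RE = re.compile(r"\*NOTICE\*: ABORTED - SQ DELETION \(00/08\)")


def summarize(path: str) -> None:
    with open(path, encoding="utf-8", errors="replace") as fh:
        text = fh.read()

    counts = defaultdict(int)   # (sqid, opcode) -> number of I/O command prints
    lbas = defaultdict(list)    # (sqid, opcode) -> LBAs seen for that queue/opcode

    for m in CMD_RE.finditer(text):
        key = (m["sqid"], m["op"])
        counts[key] += 1
        lbas[key].append(int(m["lba"]))

    # Each printed I/O command in this dump is paired with an "ABORTED - SQ DELETION"
    # completion; admin ASYNC EVENT REQUEST aborts appear only on the completion side,
    # so the two totals can differ slightly.
    print(f"I/O command prints: {sum(counts.values())}, "
          f"ABORTED - SQ DELETION completions: {len(ABORT_RE.findall(text))}")
    for (sqid, op), n in sorted(counts.items()):
        series = lbas[(sqid, op)]
        print(f"  sqid {sqid} {op:<5}: {n:4d} commands, lba {min(series)}..{max(series)}")


if __name__ == "__main__":
    # Hypothetical default name; pass the real console-log path as the first argument.
    summarize(sys.argv[1] if len(sys.argv) > 1 else "nvmf-failover-console.log")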
00:15:28.369 [2024-07-15 07:19:26.031566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:28.369 [2024-07-15 07:19:26.031645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs omitted: queued I/O on qid:1 (WRITE lba 66128-66368, READ lba 65544-65840) aborted with SQ DELETION (00/08) ...]
00:15:28.371 [2024-07-15 07:19:26.033807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:28.371 [2024-07-15 07:19:26.033820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.033835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.371 [2024-07-15 07:19:26.033848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.033863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.371 [2024-07-15 07:19:26.033877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.033892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.371 [2024-07-15 07:19:26.033905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.033921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.371 [2024-07-15 07:19:26.033934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.033949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.371 [2024-07-15 07:19:26.033963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.033978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.371 [2024-07-15 07:19:26.033991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.371 [2024-07-15 07:19:26.034027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.371 [2024-07-15 07:19:26.034056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.371 [2024-07-15 07:19:26.034103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.371 [2024-07-15 07:19:26.034133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.371 [2024-07-15 07:19:26.034163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.371 [2024-07-15 07:19:26.034191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.371 [2024-07-15 07:19:26.034220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.371 [2024-07-15 07:19:26.034248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.371 [2024-07-15 07:19:26.034276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.371 [2024-07-15 07:19:26.034304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.371 [2024-07-15 07:19:26.034333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.371 [2024-07-15 07:19:26.034361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.371 [2024-07-15 07:19:26.034397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.371 [2024-07-15 07:19:26.034428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 
[2024-07-15 07:19:26.034443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.371 [2024-07-15 07:19:26.034456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.371 [2024-07-15 07:19:26.034484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.371 [2024-07-15 07:19:26.034513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.371 [2024-07-15 07:19:26.034541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.371 [2024-07-15 07:19:26.034570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.371 [2024-07-15 07:19:26.034598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.371 [2024-07-15 07:19:26.034628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.371 [2024-07-15 07:19:26.034656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.371 [2024-07-15 07:19:26.034685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.371 [2024-07-15 07:19:26.034714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034729] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.371 [2024-07-15 07:19:26.034742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.371 [2024-07-15 07:19:26.034780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.371 [2024-07-15 07:19:26.034810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.371 [2024-07-15 07:19:26.034838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.371 [2024-07-15 07:19:26.034853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.372 [2024-07-15 07:19:26.034867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:26.034883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.372 [2024-07-15 07:19:26.034896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:26.034911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.372 [2024-07-15 07:19:26.034925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:26.034939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.372 [2024-07-15 07:19:26.034953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:26.034968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.372 [2024-07-15 07:19:26.034981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:26.034996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.372 [2024-07-15 07:19:26.035009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:26.035024] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:89 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.372 [2024-07-15 07:19:26.035038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:26.035053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.372 [2024-07-15 07:19:26.035066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:26.035093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.372 [2024-07-15 07:19:26.035108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:26.035124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.372 [2024-07-15 07:19:26.035137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:26.035160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.372 [2024-07-15 07:19:26.035175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:26.035190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.372 [2024-07-15 07:19:26.035203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:26.035222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.372 [2024-07-15 07:19:26.035235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:26.035251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.372 [2024-07-15 07:19:26.035264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:26.035279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.372 [2024-07-15 07:19:26.035292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:26.035307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.372 [2024-07-15 07:19:26.035321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:26.035337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66064 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.372 [2024-07-15 07:19:26.035350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:26.035365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.372 [2024-07-15 07:19:26.035378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:26.035394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.372 [2024-07-15 07:19:26.035407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:26.035422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.372 [2024-07-15 07:19:26.035435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:26.035451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.372 [2024-07-15 07:19:26.035464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:26.035479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.372 [2024-07-15 07:19:26.035493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:26.035507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d26d30 is same with the state(5) to be set 00:15:28.372 [2024-07-15 07:19:26.035530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:28.372 [2024-07-15 07:19:26.035540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:28.372 [2024-07-15 07:19:26.035551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66112 len:8 PRP1 0x0 PRP2 0x0 00:15:28.372 [2024-07-15 07:19:26.035565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:26.035611] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d26d30 was disconnected and freed. reset controller. 
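The long run of NOTICE pairs above is the initiator printing every READ/WRITE still queued on sqid:1 and completing it as ABORTED - SQ DELETION (00/08) while tqpair 0x1d26d30 is torn down ("aborting queued i/o"), ending with the qpair being disconnected and freed so the controller can be reset; the failover records follow below. When sifting output like this, a small filter helps. The sketch below is illustrative only: the script, its record splitting and its regexes are ours, written against the nvme_io_qpair_print_command / spdk_nvme_print_completion lines visible here, and are not part of the SPDK test suite.

#!/usr/bin/env python3
"""Tally the I/O commands that get aborted when a qpair is torn down.

Illustrative sketch only: the regexes are written against the
nvme_io_qpair_print_command / spdk_nvme_print_completion lines visible
in this console log; they are not part of the SPDK test suite.
"""
import re
import sys
from collections import Counter

# Each log record starts with a bracketed "[YYYY-MM-DD HH:MM:SS.ffffff]" stamp,
# so split the raw text on that pattern regardless of how the lines are wrapped.
RECORD_SPLIT = re.compile(r"(?=\[\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+\])")
# "READ sqid:1 cid:10 nsid:1 lba:65744 len:8 ..." from nvme_io_qpair_print_command
CMD_RE = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:(\d+) .*?lba:(\d+)")
# "ABORTED - SQ DELETION (00/08) ..." from spdk_nvme_print_completion
ABORT_RE = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: ABORTED - SQ DELETION")


def tally_aborts(text):
    """Pair every printed command with the ABORTED completion that follows it."""
    counts = Counter()   # (opcode, sqid) -> number of aborted commands
    lbas = []            # LBAs of the aborted commands, for a quick range check
    pending = None       # last command seen, still waiting for its completion record
    for record in RECORD_SPLIT.split(text):
        cmd = CMD_RE.search(record)
        if cmd:
            pending = (cmd.group(1), cmd.group(2), int(cmd.group(3)))
        elif pending and ABORT_RE.search(record):
            opcode, sqid, lba = pending
            counts[(opcode, sqid)] += 1
            lbas.append(lba)
            pending = None
    return counts, lbas


if __name__ == "__main__":
    counts, lbas = tally_aborts(sys.stdin.read())
    for (opcode, sqid), n in sorted(counts.items()):
        print(f"{opcode} sqid:{sqid} aborted: {n}")
    if lbas:
        print(f"lba range touched: {min(lbas)}..{max(lbas)}")

Feeding the raw console output on stdin (for example python3 tally_aborts.py < console.log, with both names purely illustrative) prints a per-opcode count of the aborted commands and the LBA span they covered.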
00:15:28.372 [2024-07-15 07:19:26.035629] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:15:28.372 [2024-07-15 07:19:26.035686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:15:28.372 [2024-07-15 07:19:26.035707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:28.372 [2024-07-15 07:19:26.035722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:15:28.372 [2024-07-15 07:19:26.035735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:28.372 [2024-07-15 07:19:26.035749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:15:28.372 [2024-07-15 07:19:26.035763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:28.372 [2024-07-15 07:19:26.035777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:15:28.372 [2024-07-15 07:19:26.035790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:28.372 [2024-07-15 07:19:26.035803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. [2024-07-15 07:19:26.039761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller [2024-07-15 07:19:26.039802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4570 (9): Bad file descriptor [2024-07-15 07:19:26.074168] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
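The records just above carry the interesting part of the event: the trid is failed over from 10.0.0.2:4421 to 10.0.0.2:4422, the four outstanding ASYNC EVENT REQUESTs on the admin queue are aborted, nqn.2016-06.io.spdk:cnode1 is marked failed and disconnected (with a "Bad file descriptor" flush error on the old tqpair), and about 38 ms after the failover notice the reset completes successfully. A helper like the hedged sketch below can pull those milestones out of a long console log and report the recovery latency per failover; the milestone substrings and the record regex are assumptions taken from the messages visible here, not a stable SPDK interface.

#!/usr/bin/env python3
"""Pull the failover/reset milestones out of an SPDK bdev_nvme console log.

Illustrative sketch only: the milestone substrings are the messages visible
in this log (bdev_nvme_failover_trid, nvme_ctrlr_disconnect,
_bdev_nvme_reset_ctrlr_complete); they are not a stable SPDK interface.
"""
import re
import sys
from datetime import datetime

# One record per bracketed timestamp; the body runs until the next timestamp.
RECORD_RE = re.compile(
    r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\]\s*(.*?)(?=\[\d{4}-\d{2}-\d{2} |\Z)",
    re.S,
)
MILESTONES = (
    "Start failover from",              # bdev_nvme_failover_trid
    "resetting controller",             # nvme_ctrlr_disconnect
    "Resetting controller successful",  # _bdev_nvme_reset_ctrlr_complete
)


def milestones(text):
    """Yield (timestamp, message) for every record that mentions a milestone."""
    for stamp, body in RECORD_RE.findall(text):
        if any(key in body for key in MILESTONES):
            yield datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S.%f"), " ".join(body.split())


if __name__ == "__main__":
    start = None
    for when, message in milestones(sys.stdin.read()):
        print(when.time(), message)
        if "Start failover from" in message:
            start = when
        elif "Resetting controller successful" in message and start is not None:
            print(f"  -> recovered in {(when - start).total_seconds() * 1000:.1f} ms")
            start = None

Run over the full log on stdin, it prints each milestone and, for every Start-failover / Reset-successful pair, the recovery time in milliseconds.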
00:15:28.372 [2024-07-15 07:19:30.634467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.372 [2024-07-15 07:19:30.634538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:30.634559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.372 [2024-07-15 07:19:30.634573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:30.634587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.372 [2024-07-15 07:19:30.634600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:30.634613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.372 [2024-07-15 07:19:30.634626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:30.634640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4570 is same with the state(5) to be set 00:15:28.372 [2024-07-15 07:19:30.635736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.372 [2024-07-15 07:19:30.635777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:30.635803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.372 [2024-07-15 07:19:30.635820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:30.635836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.372 [2024-07-15 07:19:30.635850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:30.635865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.372 [2024-07-15 07:19:30.635879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:30.635894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.372 [2024-07-15 07:19:30.635908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:30.635923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.372 [2024-07-15 07:19:30.635936] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:30.635951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.372 [2024-07-15 07:19:30.635964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.372 [2024-07-15 07:19:30.635979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.372 [2024-07-15 07:19:30.635992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.373 [2024-07-15 07:19:30.636020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.373 [2024-07-15 07:19:30.636049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.373 [2024-07-15 07:19:30.636092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.373 [2024-07-15 07:19:30.636122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.373 [2024-07-15 07:19:30.636151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.373 [2024-07-15 07:19:30.636193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.373 [2024-07-15 07:19:30.636222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.373 [2024-07-15 07:19:30.636250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.373 [2024-07-15 07:19:30.636279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.373 [2024-07-15 07:19:30.636308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.373 [2024-07-15 07:19:30.636336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.373 [2024-07-15 07:19:30.636364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.373 [2024-07-15 07:19:30.636393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.373 [2024-07-15 07:19:30.636422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.373 [2024-07-15 07:19:30.636452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.373 [2024-07-15 07:19:30.636481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.373 [2024-07-15 07:19:30.636510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.373 [2024-07-15 07:19:30.636546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:15:28.373 [2024-07-15 07:19:30.636562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.373 [2024-07-15 07:19:30.636575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.373 [2024-07-15 07:19:30.636604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.373 [2024-07-15 07:19:30.636632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.373 [2024-07-15 07:19:30.636661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.373 [2024-07-15 07:19:30.636696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.373 [2024-07-15 07:19:30.636724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.373 [2024-07-15 07:19:30.636752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.373 [2024-07-15 07:19:30.636780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.373 [2024-07-15 07:19:30.636808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.373 [2024-07-15 07:19:30.636836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636851] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.373 [2024-07-15 07:19:30.636864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.373 [2024-07-15 07:19:30.636892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.373 [2024-07-15 07:19:30.636929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.373 [2024-07-15 07:19:30.636958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.636973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.373 [2024-07-15 07:19:30.636988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.637003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.373 [2024-07-15 07:19:30.637017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.637044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.373 [2024-07-15 07:19:30.637091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.637126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.373 [2024-07-15 07:19:30.637151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.637169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.373 [2024-07-15 07:19:30.637182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.637201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.373 [2024-07-15 07:19:30.637215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.637236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:102 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.373 [2024-07-15 07:19:30.637262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.637293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.373 [2024-07-15 07:19:30.637321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.373 [2024-07-15 07:19:30.637364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.373 [2024-07-15 07:19:30.637394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.374 [2024-07-15 07:19:30.637422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.374 [2024-07-15 07:19:30.637449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.374 [2024-07-15 07:19:30.637479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.374 [2024-07-15 07:19:30.637507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.374 [2024-07-15 07:19:30.637561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.374 [2024-07-15 07:19:30.637590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.374 [2024-07-15 07:19:30.637619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.374 [2024-07-15 07:19:30.637646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.374 [2024-07-15 07:19:30.637675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.374 [2024-07-15 07:19:30.637701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.374 [2024-07-15 07:19:30.637718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.374 [2024-07-15 07:19:30.637732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.374 [2024-07-15 07:19:30.637753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.374 [2024-07-15 07:19:30.637767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.374 [2024-07-15 07:19:30.637782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2416 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:15:28.374 [2024-07-15 07:19:30.637796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.374 [2024-07-15 07:19:30.637811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.374 [2024-07-15 07:19:30.637824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.374 [2024-07-15 07:19:30.637840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.374 [2024-07-15 07:19:30.637853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.374 [2024-07-15 07:19:30.637868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.374 [2024-07-15 07:19:30.637881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.374 [2024-07-15 07:19:30.637896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.374 [2024-07-15 07:19:30.637909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.374 [2024-07-15 07:19:30.637925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.374 [2024-07-15 07:19:30.637949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.374 [2024-07-15 07:19:30.637968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.374 [2024-07-15 07:19:30.637981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.374 [2024-07-15 07:19:30.637997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.374 [2024-07-15 07:19:30.638021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.374 [2024-07-15 07:19:30.638038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.375 [2024-07-15 07:19:30.638051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.375 [2024-07-15 07:19:30.638066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.375 [2024-07-15 07:19:30.638099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.375 [2024-07-15 07:19:30.638116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.375 [2024-07-15 
07:19:30.638129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.375 [2024-07-15 07:19:30.638145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.375 [2024-07-15 07:19:30.638159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.375 [2024-07-15 07:19:30.638178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.375 [2024-07-15 07:19:30.638202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.375 [2024-07-15 07:19:30.638230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.375 [2024-07-15 07:19:30.638248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.375 [2024-07-15 07:19:30.638264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.375 [2024-07-15 07:19:30.638277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.375 [2024-07-15 07:19:30.638293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.375 [2024-07-15 07:19:30.638311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.375 [2024-07-15 07:19:30.638339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.375 [2024-07-15 07:19:30.638369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.375 [2024-07-15 07:19:30.638411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.375 [2024-07-15 07:19:30.638436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.375 [2024-07-15 07:19:30.638454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.375 [2024-07-15 07:19:30.638467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.375 [2024-07-15 07:19:30.638483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.375 [2024-07-15 07:19:30.638496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.375 [2024-07-15 07:19:30.638522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.375 [2024-07-15 07:19:30.638541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.375 [2024-07-15 07:19:30.638569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.375 [2024-07-15 07:19:30.638598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.375 [2024-07-15 07:19:30.638627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.375 [2024-07-15 07:19:30.638654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.375 [2024-07-15 07:19:30.638683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.375 [2024-07-15 07:19:30.638709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.375 [2024-07-15 07:19:30.638739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.375 [2024-07-15 07:19:30.638767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.375 [2024-07-15 07:19:30.638797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.375 [2024-07-15 07:19:30.638821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.375 [2024-07-15 07:19:30.638842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.375 [2024-07-15 07:19:30.638856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.375 [2024-07-15 07:19:30.638871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.375 [2024-07-15 07:19:30.638890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.638905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.376 [2024-07-15 07:19:30.638918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.638934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.376 [2024-07-15 07:19:30.638947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.638962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.376 [2024-07-15 07:19:30.638977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.638992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.376 [2024-07-15 07:19:30.639005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.639021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.376 [2024-07-15 07:19:30.639046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.639063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.376 [2024-07-15 07:19:30.639093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.639110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.376 [2024-07-15 07:19:30.639131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.639159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.376 [2024-07-15 07:19:30.639189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.639219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.376 [2024-07-15 07:19:30.639242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.639259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.376 [2024-07-15 07:19:30.639273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.639292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.376 [2024-07-15 07:19:30.639318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.639349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.376 [2024-07-15 07:19:30.639376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.639406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.376 [2024-07-15 07:19:30.639442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:15:28.376 [2024-07-15 07:19:30.639472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.376 [2024-07-15 07:19:30.639499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.639527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:28.376 [2024-07-15 07:19:30.639548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.639564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.376 [2024-07-15 07:19:30.639578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.639593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.376 [2024-07-15 07:19:30.639606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.639622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.376 [2024-07-15 07:19:30.639649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.639665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.376 [2024-07-15 07:19:30.639680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.639696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.376 [2024-07-15 07:19:30.639709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.639725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.376 [2024-07-15 07:19:30.639738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.639753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:28.376 [2024-07-15 07:19:30.639766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.639780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d25dd0 is same with the state(5) to be set 00:15:28.376 [2024-07-15 07:19:30.639797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:28.376 [2024-07-15 07:19:30.639808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:15:28.376 [2024-07-15 07:19:30.639818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2160 len:8 PRP1 0x0 PRP2 0x0 00:15:28.376 [2024-07-15 07:19:30.639831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.639845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:28.376 [2024-07-15 07:19:30.639855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:28.376 [2024-07-15 07:19:30.639865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2504 len:8 PRP1 0x0 PRP2 0x0 00:15:28.376 [2024-07-15 07:19:30.639878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.639891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:28.376 [2024-07-15 07:19:30.639901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:28.376 [2024-07-15 07:19:30.639911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2512 len:8 PRP1 0x0 PRP2 0x0 00:15:28.376 [2024-07-15 07:19:30.639924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.639937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:28.376 [2024-07-15 07:19:30.639947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:28.376 [2024-07-15 07:19:30.639957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2520 len:8 PRP1 0x0 PRP2 0x0 00:15:28.376 [2024-07-15 07:19:30.639969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.639983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:28.376 [2024-07-15 07:19:30.639992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:28.376 [2024-07-15 07:19:30.640002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:8 PRP1 0x0 PRP2 0x0 00:15:28.376 [2024-07-15 07:19:30.640022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.640036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:28.376 [2024-07-15 07:19:30.640046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:28.376 [2024-07-15 07:19:30.640056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2536 len:8 PRP1 0x0 PRP2 0x0 00:15:28.376 [2024-07-15 07:19:30.640069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.640101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:28.376 [2024-07-15 07:19:30.640112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:28.376 [2024-07-15 
07:19:30.640122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2544 len:8 PRP1 0x0 PRP2 0x0 00:15:28.376 [2024-07-15 07:19:30.640135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.640148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:28.376 [2024-07-15 07:19:30.640157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:28.376 [2024-07-15 07:19:30.640167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2552 len:8 PRP1 0x0 PRP2 0x0 00:15:28.376 [2024-07-15 07:19:30.640180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.640194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:28.376 [2024-07-15 07:19:30.640203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:28.376 [2024-07-15 07:19:30.640213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:8 PRP1 0x0 PRP2 0x0 00:15:28.376 [2024-07-15 07:19:30.640225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.640239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:28.376 [2024-07-15 07:19:30.640248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:28.376 [2024-07-15 07:19:30.640259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2568 len:8 PRP1 0x0 PRP2 0x0 00:15:28.376 [2024-07-15 07:19:30.640271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.640284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:28.376 [2024-07-15 07:19:30.640294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:28.376 [2024-07-15 07:19:30.640305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2576 len:8 PRP1 0x0 PRP2 0x0 00:15:28.376 [2024-07-15 07:19:30.640318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.376 [2024-07-15 07:19:30.640332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:28.376 [2024-07-15 07:19:30.640341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:28.376 [2024-07-15 07:19:30.640351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2584 len:8 PRP1 0x0 PRP2 0x0 00:15:28.376 [2024-07-15 07:19:30.640364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.377 [2024-07-15 07:19:30.640377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:28.377 [2024-07-15 07:19:30.640394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:28.377 [2024-07-15 07:19:30.640405] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:8 PRP1 0x0 PRP2 0x0 00:15:28.377 [2024-07-15 07:19:30.640418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.377 [2024-07-15 07:19:30.640431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:28.377 [2024-07-15 07:19:30.640441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:28.377 [2024-07-15 07:19:30.640451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2600 len:8 PRP1 0x0 PRP2 0x0 00:15:28.377 [2024-07-15 07:19:30.640464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.377 [2024-07-15 07:19:30.640478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:28.377 [2024-07-15 07:19:30.640488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:28.377 [2024-07-15 07:19:30.640498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2608 len:8 PRP1 0x0 PRP2 0x0 00:15:28.377 [2024-07-15 07:19:30.640511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.377 [2024-07-15 07:19:30.640524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:28.377 [2024-07-15 07:19:30.640541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:28.377 [2024-07-15 07:19:30.640559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2616 len:8 PRP1 0x0 PRP2 0x0 00:15:28.377 [2024-07-15 07:19:30.640584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.377 [2024-07-15 07:19:30.640610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:28.377 [2024-07-15 07:19:30.640628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:28.377 [2024-07-15 07:19:30.640646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:8 PRP1 0x0 PRP2 0x0 00:15:28.377 [2024-07-15 07:19:30.640669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.377 [2024-07-15 07:19:30.640692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:28.377 [2024-07-15 07:19:30.640710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:28.377 [2024-07-15 07:19:30.640729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2632 len:8 PRP1 0x0 PRP2 0x0 00:15:28.377 [2024-07-15 07:19:30.640752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.377 [2024-07-15 07:19:30.640776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:28.377 [2024-07-15 07:19:30.640795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:28.377 [2024-07-15 07:19:30.640814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:2640 len:8 PRP1 0x0 PRP2 0x0 00:15:28.377 [2024-07-15 07:19:30.640839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.377 [2024-07-15 07:19:30.640865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:28.377 [2024-07-15 07:19:30.640884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:28.377 [2024-07-15 07:19:30.640903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2648 len:8 PRP1 0x0 PRP2 0x0 00:15:28.377 [2024-07-15 07:19:30.640927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.377 [2024-07-15 07:19:30.640967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:28.377 [2024-07-15 07:19:30.640987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:28.377 [2024-07-15 07:19:30.641003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:8 PRP1 0x0 PRP2 0x0 00:15:28.377 [2024-07-15 07:19:30.641025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.377 [2024-07-15 07:19:30.641050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:28.377 [2024-07-15 07:19:30.641069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:28.377 [2024-07-15 07:19:30.641105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2664 len:8 PRP1 0x0 PRP2 0x0 00:15:28.377 [2024-07-15 07:19:30.641126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.377 [2024-07-15 07:19:30.641201] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d25dd0 was disconnected and freed. reset controller. 00:15:28.377 [2024-07-15 07:19:30.641232] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:15:28.377 [2024-07-15 07:19:30.641255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:28.377 [2024-07-15 07:19:30.645699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:28.377 [2024-07-15 07:19:30.645778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4570 (9): Bad file descriptor 00:15:28.377 [2024-07-15 07:19:30.685635] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:28.377
00:15:28.377 Latency(us)
00:15:28.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:28.377 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:28.377 Verification LBA range: start 0x0 length 0x4000
00:15:28.377 NVMe0n1 : 15.01 8662.20 33.84 217.80 0.00 14380.19 655.36 16443.58
00:15:28.377 ===================================================================================================================
00:15:28.377 Total : 8662.20 33.84 217.80 0.00 14380.19 655.36 16443.58
00:15:28.377 Received shutdown signal, test time was about 15.000000 seconds
00:15:28.377
00:15:28.377 Latency(us)
00:15:28.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:28.377 ===================================================================================================================
00:15:28.377 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:15:28.377 07:19:36 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:15:28.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:15:28.377 07:19:36 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:15:28.377 07:19:36 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:15:28.377 07:19:36 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75864
00:15:28.377 07:19:36 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:15:28.377 07:19:36 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75864 /var/tmp/bdevperf.sock
00:15:28.377 07:19:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75864 ']'
00:15:28.377 07:19:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:15:28.377 07:19:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:15:28.377 07:19:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:15:28.377 07:19:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:28.377 07:19:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:28.377 07:19:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:28.377 07:19:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:28.377 07:19:36 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:28.377 [2024-07-15 07:19:37.045505] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:28.377 07:19:37 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:28.635 [2024-07-15 07:19:37.329729] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:28.635 07:19:37 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:28.893 NVMe0n1 00:15:28.893 07:19:37 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:29.151 00:15:29.151 07:19:37 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:29.408 00:15:29.408 07:19:38 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:29.408 07:19:38 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:15:29.666 07:19:38 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:29.924 07:19:38 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:15:33.207 07:19:41 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:33.207 07:19:41 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:15:33.207 07:19:42 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75928 00:15:33.207 07:19:42 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:33.207 07:19:42 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 75928 00:15:34.582 0 00:15:34.582 07:19:43 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:34.582 [2024-07-15 07:19:36.515578] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:15:34.582 [2024-07-15 07:19:36.515738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75864 ] 00:15:34.582 [2024-07-15 07:19:36.650514] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.582 [2024-07-15 07:19:36.709501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.582 [2024-07-15 07:19:36.739363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:34.582 [2024-07-15 07:19:38.813624] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:34.582 [2024-07-15 07:19:38.813731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.582 [2024-07-15 07:19:38.813756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.582 [2024-07-15 07:19:38.813774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.582 [2024-07-15 07:19:38.813787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.582 [2024-07-15 07:19:38.813801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.582 [2024-07-15 07:19:38.813814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.582 [2024-07-15 07:19:38.813827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.582 [2024-07-15 07:19:38.813840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.582 [2024-07-15 07:19:38.813854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:34.582 [2024-07-15 07:19:38.813903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:34.582 [2024-07-15 07:19:38.813947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231d570 (9): Bad file descriptor 00:15:34.582 [2024-07-15 07:19:38.818176] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:34.582 Running I/O for 1 seconds... 
00:15:34.582
00:15:34.583 Latency(us)
00:15:34.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:34.583 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:34.583 Verification LBA range: start 0x0 length 0x4000
00:15:34.583 NVMe0n1 : 1.01 6766.70 26.43 0.00 0.00 18838.43 2412.92 15728.64
00:15:34.583 ===================================================================================================================
00:15:34.583 Total : 6766.70 26.43 0.00 0.00 18838.43 2412.92 15728.64
00:15:34.583 07:19:43 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:15:34.583 07:19:43 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:15:34.841 07:19:43 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:15:35.099 07:19:43 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:15:35.099 07:19:43 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:15:35.358 07:19:44 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:15:35.616 07:19:44 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:15:38.899 07:19:47 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:15:38.899 07:19:47 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:15:38.899 07:19:47 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 75864
00:15:38.899 07:19:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75864 ']'
00:15:38.899 07:19:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75864
00:15:38.899 07:19:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:15:38.899 07:19:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:15:38.899 07:19:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75864
00:15:38.899 killing process with pid 75864
00:15:38.899 07:19:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:15:38.899 07:19:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:15:38.899 07:19:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75864'
00:15:38.899 07:19:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75864
00:15:38.899 07:19:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75864
00:15:38.899 07:19:47 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:15:39.157 07:19:47 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:15:39.416 07:19:48 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:15:39.416 07:19:48 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:15:39.416 07:19:48
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:39.416 07:19:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:39.416 07:19:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:15:39.416 07:19:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:39.416 07:19:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:15:39.416 07:19:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:39.416 07:19:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:39.416 rmmod nvme_tcp 00:15:39.416 rmmod nvme_fabrics 00:15:39.416 rmmod nvme_keyring 00:15:39.416 07:19:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:39.416 07:19:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:15:39.416 07:19:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:15:39.416 07:19:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 75603 ']' 00:15:39.416 07:19:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 75603 00:15:39.416 07:19:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75603 ']' 00:15:39.416 07:19:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75603 00:15:39.416 07:19:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:15:39.416 07:19:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:39.416 07:19:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75603 00:15:39.416 killing process with pid 75603 00:15:39.416 07:19:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:39.417 07:19:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:39.417 07:19:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75603' 00:15:39.417 07:19:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75603 00:15:39.417 07:19:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75603 00:15:39.675 07:19:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:39.675 07:19:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:39.675 07:19:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:39.675 07:19:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:39.675 07:19:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:39.675 07:19:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.675 07:19:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.675 07:19:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.675 07:19:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:39.675 ************************************ 00:15:39.675 END TEST nvmf_failover 00:15:39.675 ************************************ 00:15:39.675 00:15:39.675 real 0m32.560s 00:15:39.675 user 2m6.469s 00:15:39.675 sys 0m5.596s 00:15:39.675 07:19:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:39.675 07:19:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:39.675 07:19:48 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:15:39.675 07:19:48 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:39.675 07:19:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:39.675 07:19:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:39.675 07:19:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:39.675 ************************************ 00:15:39.675 START TEST nvmf_host_discovery 00:15:39.675 ************************************ 00:15:39.675 07:19:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:39.675 * Looking for test storage... 00:15:39.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:39.675 07:19:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:39.675 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:39.935 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.935 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.935 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.935 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.935 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.935 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.935 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.935 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.935 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.935 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.935 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:15:39.935 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:15:39.935 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.935 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.935 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:39.935 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.935 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:39.935 07:19:48 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.935 07:19:48 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.935 07:19:48 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.935 07:19:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.935 07:19:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.935 07:19:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.935 07:19:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:39.936 Cannot find device "nvmf_tgt_br" 00:15:39.936 
07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:39.936 Cannot find device "nvmf_tgt_br2" 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:39.936 Cannot find device "nvmf_tgt_br" 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:39.936 Cannot find device "nvmf_tgt_br2" 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:39.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:39.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:39.936 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:40.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:40.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:15:40.195 00:15:40.195 --- 10.0.0.2 ping statistics --- 00:15:40.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.195 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:40.195 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:40.195 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:15:40.195 00:15:40.195 --- 10.0.0.3 ping statistics --- 00:15:40.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.195 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:40.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:40.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:40.195 00:15:40.195 --- 10.0.0.1 ping statistics --- 00:15:40.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.195 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=76195 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 76195 00:15:40.195 07:19:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76195 ']' 00:15:40.196 07:19:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.196 07:19:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:40.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.196 07:19:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.196 07:19:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:40.196 07:19:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.196 [2024-07-15 07:19:49.038376] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:15:40.196 [2024-07-15 07:19:49.038485] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.454 [2024-07-15 07:19:49.177187] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.454 [2024-07-15 07:19:49.246203] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:40.454 [2024-07-15 07:19:49.246264] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.454 [2024-07-15 07:19:49.246290] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.454 [2024-07-15 07:19:49.246300] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.454 [2024-07-15 07:19:49.246309] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:40.454 [2024-07-15 07:19:49.246350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.454 [2024-07-15 07:19:49.280267] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:41.387 07:19:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:41.387 07:19:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:15:41.387 07:19:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:41.387 07:19:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.388 [2024-07-15 07:19:50.097285] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.388 [2024-07-15 07:19:50.105323] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.388 null0 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.388 null1 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.388 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76227 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76227 /tmp/host.sock 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76227 ']' 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:41.388 07:19:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.388 [2024-07-15 07:19:50.186712] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:15:41.388 [2024-07-15 07:19:50.187265] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76227 ] 00:15:41.388 [2024-07-15 07:19:50.328831] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.647 [2024-07-15 07:19:50.417197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.647 [2024-07-15 07:19:50.450836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.583 07:19:51 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.583 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:42.584 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:15:42.584 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:42.584 07:19:51 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.584 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.584 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:42.584 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:42.584 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:42.584 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.584 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:15:42.584 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:42.584 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.584 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.584 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.584 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:15:42.584 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:42.584 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:42.584 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:42.584 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.584 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:42.584 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.584 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.844 [2024-07-15 07:19:51.613756] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:15:42.844 
07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.844 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.114 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.114 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:43.114 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:43.114 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:43.114 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:43.114 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:43.114 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:43.114 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:43.114 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.114 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:43.114 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.114 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:43.114 07:19:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:43.114 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.114 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:15:43.114 07:19:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:15:43.373 [2024-07-15 07:19:52.230326] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:43.373 [2024-07-15 07:19:52.230365] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:43.373 [2024-07-15 07:19:52.230385] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:43.373 [2024-07-15 07:19:52.236405] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:43.373 [2024-07-15 07:19:52.294002] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:15:43.373 [2024-07-15 07:19:52.294038] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:43.940 07:19:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:43.940 07:19:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:43.940 07:19:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:43.940 07:19:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:43.940 07:19:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.940 07:19:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:43.940 07:19:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:43.940 07:19:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.940 07:19:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:43.940 07:19:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:44.199 07:19:52 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:44.199 07:19:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.199 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:15:44.199 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:44.199 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:15:44.199 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:44.199 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:44.199 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:44.199 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:44.199 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:44.199 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:44.199 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:44.199 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:44.199 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:44.199 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.199 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.199 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.199 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:44.199 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:15:44.199 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:44.199 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:44.199 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:44.199 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.199 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.199 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.199 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:44.200 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:44.200 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:44.200 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:44.200 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:44.200 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:44.200 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:44.200 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:44.200 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:44.200 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.200 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.200 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:44.200 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.494 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:44.495 
07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.495 [2024-07-15 07:19:53.211652] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:44.495 [2024-07-15 07:19:53.212032] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:44.495 [2024-07-15 07:19:53.212066] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:44.495 [2024-07-15 07:19:53.218024] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:44.495 
07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.495 [2024-07-15 07:19:53.276305] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:44.495 [2024-07-15 07:19:53.276334] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:44.495 [2024-07-15 07:19:53.276342] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:44.495 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.754 [2024-07-15 07:19:53.440476] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:44.754 [2024-07-15 07:19:53.440542] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:44.754 [2024-07-15 07:19:53.441371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.754 [2024-07-15 07:19:53.441421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.754 [2024-07-15 07:19:53.441436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.754 [2024-07-15 07:19:53.441445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.754 [2024-07-15 07:19:53.441455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.754 [2024-07-15 07:19:53.441464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.754 [2024-07-15 07:19:53.441474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.754 [2024-07-15 07:19:53.441483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.754 [2024-07-15 07:19:53.441492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1743600 is same with the state(5) to be set 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:44.754 [2024-07-15 07:19:53.446581] bdev_nvme.c:6770:discovery_remove_controllers: 
*INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:15:44.754 [2024-07-15 07:19:53.446615] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:44.754 [2024-07-15 07:19:53.446678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1743600 (9): Bad file descriptor 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 
max=10 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.754 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.755 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:15:44.755 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:15:44.755 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:44.755 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:44.755 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:15:44.755 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:44.755 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:44.755 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:44.755 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:44.755 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:44.755 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.755 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.755 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:45.013 07:19:53 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:45.013 07:19:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:45.014 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.014 07:19:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.951 [2024-07-15 07:19:54.861212] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:45.951 [2024-07-15 07:19:54.861252] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:45.951 [2024-07-15 07:19:54.861287] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:45.951 [2024-07-15 07:19:54.867246] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:15:46.210 [2024-07-15 07:19:54.927539] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:46.210 [2024-07-15 07:19:54.927615] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:46.210 07:19:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.210 07:19:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:46.210 07:19:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:46.210 07:19:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:46.210 07:19:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:46.210 07:19:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.210 07:19:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:46.210 07:19:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.210 07:19:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:46.210 07:19:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.210 07:19:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:46.210 request: 00:15:46.210 { 00:15:46.210 "name": "nvme", 00:15:46.210 "trtype": "tcp", 00:15:46.210 "traddr": "10.0.0.2", 00:15:46.210 "adrfam": "ipv4", 00:15:46.210 "trsvcid": 
"8009", 00:15:46.210 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:46.210 "wait_for_attach": true, 00:15:46.210 "method": "bdev_nvme_start_discovery", 00:15:46.210 "req_id": 1 00:15:46.210 } 00:15:46.210 Got JSON-RPC error response 00:15:46.210 response: 00:15:46.210 { 00:15:46.210 "code": -17, 00:15:46.210 "message": "File exists" 00:15:46.210 } 00:15:46.211 07:19:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:46.211 07:19:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:15:46.211 07:19:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:46.211 07:19:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:46.211 07:19:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:46.211 07:19:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:15:46.211 07:19:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:46.211 07:19:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:46.211 07:19:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.211 07:19:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:46.211 07:19:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:46.211 07:19:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:46.211 07:19:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # 
type -t rpc_cmd 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:46.211 request: 00:15:46.211 { 00:15:46.211 "name": "nvme_second", 00:15:46.211 "trtype": "tcp", 00:15:46.211 "traddr": "10.0.0.2", 00:15:46.211 "adrfam": "ipv4", 00:15:46.211 "trsvcid": "8009", 00:15:46.211 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:46.211 "wait_for_attach": true, 00:15:46.211 "method": "bdev_nvme_start_discovery", 00:15:46.211 "req_id": 1 00:15:46.211 } 00:15:46.211 Got JSON-RPC error response 00:15:46.211 response: 00:15:46.211 { 00:15:46.211 "code": -17, 00:15:46.211 "message": "File exists" 00:15:46.211 } 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:46.211 07:19:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:46.470 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.470 07:19:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:46.470 07:19:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:46.470 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:46.470 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:46.470 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:46.470 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.470 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:46.470 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.470 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:46.470 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.470 07:19:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:47.406 [2024-07-15 07:19:56.200619] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:47.406 [2024-07-15 07:19:56.200722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175cf20 with addr=10.0.0.2, port=8010 00:15:47.406 [2024-07-15 07:19:56.200759] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:47.406 [2024-07-15 07:19:56.200770] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:47.406 [2024-07-15 07:19:56.200779] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:48.341 [2024-07-15 07:19:57.200620] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:48.341 [2024-07-15 07:19:57.200725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175cf20 with addr=10.0.0.2, port=8010 00:15:48.341 [2024-07-15 07:19:57.200747] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:48.341 [2024-07-15 07:19:57.200757] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:48.341 [2024-07-15 07:19:57.200767] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:49.274 [2024-07-15 07:19:58.200466] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:15:49.274 request: 00:15:49.274 { 00:15:49.274 "name": "nvme_second", 00:15:49.274 "trtype": "tcp", 00:15:49.274 "traddr": "10.0.0.2", 00:15:49.274 "adrfam": "ipv4", 00:15:49.274 "trsvcid": "8010", 00:15:49.274 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:49.274 "wait_for_attach": false, 00:15:49.274 "attach_timeout_ms": 3000, 00:15:49.274 "method": "bdev_nvme_start_discovery", 00:15:49.274 "req_id": 1 00:15:49.274 } 00:15:49.274 Got JSON-RPC error response 00:15:49.274 response: 00:15:49.274 { 00:15:49.274 "code": -110, 00:15:49.274 "message": "Connection timed out" 00:15:49.274 } 00:15:49.274 07:19:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:49.274 07:19:58 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@651 -- # es=1 00:15:49.275 07:19:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:49.275 07:19:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:49.275 07:19:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:49.275 07:19:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:15:49.275 07:19:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:49.275 07:19:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:49.275 07:19:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:49.275 07:19:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:49.275 07:19:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.275 07:19:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:49.275 07:19:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.532 07:19:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:15:49.532 07:19:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:15:49.532 07:19:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76227 00:15:49.532 07:19:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:15:49.532 07:19:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:49.532 07:19:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:15:49.532 07:19:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:49.532 07:19:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:15:49.532 07:19:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:49.532 07:19:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:49.532 rmmod nvme_tcp 00:15:49.532 rmmod nvme_fabrics 00:15:49.532 rmmod nvme_keyring 00:15:49.532 07:19:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:49.532 07:19:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:15:49.532 07:19:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:15:49.532 07:19:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 76195 ']' 00:15:49.532 07:19:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 76195 00:15:49.532 07:19:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 76195 ']' 00:15:49.532 07:19:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 76195 00:15:49.532 07:19:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:15:49.532 07:19:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:49.532 07:19:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76195 00:15:49.532 07:19:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:49.532 killing process with pid 76195 00:15:49.532 07:19:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:49.532 07:19:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 
-- # echo 'killing process with pid 76195' 00:15:49.532 07:19:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 76195 00:15:49.532 07:19:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 76195 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:49.790 00:15:49.790 real 0m10.050s 00:15:49.790 user 0m19.653s 00:15:49.790 sys 0m1.810s 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:49.790 ************************************ 00:15:49.790 END TEST nvmf_host_discovery 00:15:49.790 ************************************ 00:15:49.790 07:19:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:49.790 07:19:58 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:49.790 07:19:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:49.790 07:19:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:49.790 07:19:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:49.790 ************************************ 00:15:49.790 START TEST nvmf_host_multipath_status 00:15:49.790 ************************************ 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:49.790 * Looking for test storage... 
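Before the multipath_status output continues, a brief recap of the discovery behaviour exercised above, as a minimal sketch (socket path and arguments copied from this log; it assumes the host app from the test is still listening on /tmp/host.sock):

# Re-issuing bdev_nvme_start_discovery for a discovery endpoint that is already being
# tracked (10.0.0.2:8009 here) fails with JSON-RPC -17 "File exists"; both the reuse of
# the name "nvme" and the new name "nvme_second" hit that error above.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
# Pointing a discovery service at port 8010, where nothing listens, makes every connect
# attempt fail (errno 111) until the -T 3000 ms attach timeout expires and the RPC
# returns -110 "Connection timed out", as in the nvme_second case above.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000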
00:15:49.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:49.790 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:50.048 Cannot find device "nvmf_tgt_br" 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:15:50.048 Cannot find device "nvmf_tgt_br2" 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:50.048 Cannot find device "nvmf_tgt_br" 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:50.048 Cannot find device "nvmf_tgt_br2" 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:50.048 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:50.048 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:50.048 07:19:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:50.307 07:19:59 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:50.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:50.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:15:50.307 00:15:50.307 --- 10.0.0.2 ping statistics --- 00:15:50.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.307 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:50.307 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:50.307 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:15:50.307 00:15:50.307 --- 10.0.0.3 ping statistics --- 00:15:50.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.307 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:50.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:50.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:50.307 00:15:50.307 --- 10.0.0.1 ping statistics --- 00:15:50.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.307 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=76687 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 76687 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76687 ']' 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:50.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:50.307 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:50.307 [2024-07-15 07:19:59.190236] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
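The target starting up at this point was launched inside the nvmf_tgt_ns_spdk namespace created above; a minimal sketch of that launch-and-wait step (command line and paths copied from this log; the poll loop below is only a stand-in for the test's waitforlisten helper):

# Run nvmf_tgt inside the target namespace on cores 0-1 (-m 0x3, matching the two
# reactors reported below) with all tracepoint groups enabled (-e 0xFFFF).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
# Block until the target's JSON-RPC socket answers before sending any configuration RPCs.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done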
00:15:50.307 [2024-07-15 07:19:59.190327] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.565 [2024-07-15 07:19:59.326670] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:50.565 [2024-07-15 07:19:59.396476] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.565 [2024-07-15 07:19:59.396529] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.565 [2024-07-15 07:19:59.396543] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.565 [2024-07-15 07:19:59.396553] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.565 [2024-07-15 07:19:59.396561] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:50.565 [2024-07-15 07:19:59.396724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.565 [2024-07-15 07:19:59.396738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.565 [2024-07-15 07:19:59.429039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:50.565 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:50.566 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:15:50.566 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:50.566 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:50.566 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:50.829 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.829 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76687 00:15:50.829 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:50.829 [2024-07-15 07:19:59.773691] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.090 07:19:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:51.351 Malloc0 00:15:51.351 07:20:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:15:51.610 07:20:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:51.869 07:20:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:52.127 [2024-07-15 07:20:00.891904] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:52.127 07:20:00 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:52.385 [2024-07-15 07:20:01.124053] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:52.385 07:20:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76738 00:15:52.385 07:20:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:15:52.385 07:20:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:52.385 07:20:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76738 /var/tmp/bdevperf.sock 00:15:52.385 07:20:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76738 ']' 00:15:52.385 07:20:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:52.385 07:20:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.385 07:20:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:52.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:52.385 07:20:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.386 07:20:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:53.319 07:20:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.319 07:20:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:15:53.319 07:20:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:53.577 07:20:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:15:54.144 Nvme0n1 00:15:54.144 07:20:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:54.403 Nvme0n1 00:15:54.403 07:20:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:15:54.403 07:20:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:15:56.315 07:20:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:15:56.315 07:20:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:56.574 07:20:05 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:56.832 07:20:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:15:58.207 07:20:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:15:58.207 07:20:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:58.207 07:20:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.207 07:20:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:58.207 07:20:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.207 07:20:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:58.207 07:20:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:58.207 07:20:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.465 07:20:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:58.465 07:20:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:58.466 07:20:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.466 07:20:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:58.724 07:20:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.724 07:20:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:58.724 07:20:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.724 07:20:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:58.982 07:20:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.982 07:20:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:58.982 07:20:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.982 07:20:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:59.240 07:20:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:59.240 07:20:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:15:59.240 07:20:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:59.240 07:20:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:59.499 07:20:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:59.499 07:20:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:15:59.499 07:20:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:00.178 07:20:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:00.178 07:20:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:01.129 07:20:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:01.129 07:20:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:01.129 07:20:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.129 07:20:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:01.387 07:20:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:01.387 07:20:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:01.387 07:20:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.387 07:20:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:01.646 07:20:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.646 07:20:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:01.646 07:20:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.646 07:20:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:02.213 07:20:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:02.213 07:20:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:02.213 07:20:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:02.213 07:20:10 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:02.472 07:20:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:02.472 07:20:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:02.472 07:20:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:02.472 07:20:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:02.730 07:20:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:02.730 07:20:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:02.730 07:20:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:02.730 07:20:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:02.989 07:20:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:02.990 07:20:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:02.990 07:20:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:02.990 07:20:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:03.249 07:20:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:04.624 07:20:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:04.624 07:20:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:04.624 07:20:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:04.624 07:20:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.624 07:20:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.624 07:20:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:04.624 07:20:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.624 07:20:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:04.881 07:20:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:04.881 
07:20:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:04.881 07:20:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:04.881 07:20:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:05.137 07:20:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:05.137 07:20:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:05.137 07:20:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:05.137 07:20:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:05.394 07:20:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:05.394 07:20:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:05.394 07:20:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:05.394 07:20:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:05.652 07:20:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:05.652 07:20:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:05.652 07:20:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:05.652 07:20:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:06.269 07:20:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:06.269 07:20:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:06.269 07:20:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:06.269 07:20:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:06.541 07:20:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:07.488 07:20:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:07.488 07:20:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:07.488 07:20:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.488 07:20:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:07.748 07:20:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:07.748 07:20:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:07.748 07:20:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.748 07:20:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:08.313 07:20:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:08.313 07:20:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:08.313 07:20:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:08.313 07:20:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.571 07:20:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:08.571 07:20:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:08.571 07:20:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.571 07:20:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:08.829 07:20:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:08.829 07:20:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:08.829 07:20:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.829 07:20:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:09.087 07:20:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:09.087 07:20:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:09.087 07:20:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:09.087 07:20:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:09.352 07:20:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:09.352 07:20:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:16:09.352 07:20:18 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:09.614 07:20:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:09.871 07:20:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:10.805 07:20:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:10.805 07:20:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:10.805 07:20:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.805 07:20:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:11.372 07:20:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:11.372 07:20:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:11.372 07:20:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.372 07:20:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:11.372 07:20:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:11.372 07:20:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:11.372 07:20:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:11.372 07:20:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.630 07:20:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.630 07:20:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:11.630 07:20:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.630 07:20:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:11.888 07:20:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.888 07:20:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:11.889 07:20:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:11.889 07:20:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:12.455 07:20:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:12.455 07:20:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:12.455 07:20:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:12.455 07:20:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:12.773 07:20:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:12.773 07:20:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:12.773 07:20:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:12.773 07:20:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:13.047 07:20:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:14.424 07:20:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:14.425 07:20:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:14.425 07:20:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.425 07:20:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:14.425 07:20:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:14.425 07:20:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:14.425 07:20:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.425 07:20:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:14.685 07:20:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.685 07:20:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:14.685 07:20:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.685 07:20:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:14.941 07:20:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.941 07:20:23 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:14.942 07:20:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.942 07:20:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:15.198 07:20:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:15.198 07:20:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:15.198 07:20:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:15.198 07:20:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:15.455 07:20:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:15.455 07:20:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:15.455 07:20:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:15.455 07:20:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:15.712 07:20:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:15.712 07:20:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:16.275 07:20:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:16.275 07:20:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:16.532 07:20:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:16.789 07:20:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:17.719 07:20:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:17.719 07:20:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:17.719 07:20:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.719 07:20:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:18.282 07:20:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.282 07:20:26 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:18.282 07:20:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:18.282 07:20:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.541 07:20:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.541 07:20:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:18.541 07:20:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:18.541 07:20:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.799 07:20:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.799 07:20:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:18.799 07:20:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.799 07:20:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:19.057 07:20:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:19.057 07:20:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:19.057 07:20:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:19.057 07:20:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.342 07:20:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:19.342 07:20:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:19.342 07:20:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:19.342 07:20:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.600 07:20:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:19.600 07:20:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:19.600 07:20:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:19.858 07:20:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:20.116 07:20:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:21.048 07:20:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:21.048 07:20:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:21.048 07:20:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.048 07:20:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:21.306 07:20:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:21.306 07:20:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:21.306 07:20:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.306 07:20:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:21.564 07:20:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.564 07:20:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:21.564 07:20:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.564 07:20:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:21.821 07:20:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.821 07:20:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:21.821 07:20:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.821 07:20:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:22.078 07:20:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.078 07:20:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:22.078 07:20:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.078 07:20:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:22.335 07:20:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.335 07:20:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:22.335 07:20:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.335 07:20:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:22.900 07:20:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.900 07:20:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:22.900 07:20:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:23.159 07:20:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:23.159 07:20:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:24.532 07:20:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:24.532 07:20:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:24.532 07:20:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.532 07:20:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:24.532 07:20:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:24.532 07:20:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:24.532 07:20:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:24.532 07:20:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.789 07:20:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:24.789 07:20:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:24.789 07:20:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:24.789 07:20:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.060 07:20:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:25.060 07:20:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:25.060 07:20:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.060 07:20:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:16:25.342 07:20:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:25.342 07:20:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:25.342 07:20:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.342 07:20:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:25.907 07:20:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:25.907 07:20:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:25.907 07:20:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.907 07:20:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:26.165 07:20:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:26.165 07:20:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:26.165 07:20:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:26.422 07:20:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:26.681 07:20:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:27.614 07:20:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:27.614 07:20:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:27.614 07:20:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:27.614 07:20:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.872 07:20:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:27.872 07:20:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:27.872 07:20:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.872 07:20:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:28.130 07:20:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:28.130 07:20:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # 
port_status 4420 connected true 00:16:28.130 07:20:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:28.130 07:20:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:28.388 07:20:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:28.388 07:20:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:28.388 07:20:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:28.388 07:20:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:28.647 07:20:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:28.647 07:20:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:28.647 07:20:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:28.647 07:20:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:28.906 07:20:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:28.906 07:20:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:28.906 07:20:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:28.906 07:20:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:29.165 07:20:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:29.165 07:20:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76738 00:16:29.165 07:20:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76738 ']' 00:16:29.165 07:20:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76738 00:16:29.165 07:20:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:16:29.165 07:20:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:29.165 07:20:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76738 00:16:29.165 killing process with pid 76738 00:16:29.165 07:20:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:29.165 07:20:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:29.165 07:20:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76738' 00:16:29.165 07:20:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76738 
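For reference while reading the trace above: every check_status A B C D E F block expands into six port_status checks (current, connected and accessible for ports 4420 and 4421), each of which queries the bdevperf RPC socket with bdev_nvme_get_io_paths and filters the JSON with jq, while each set_ANA_state call flips the ANA state of the two listeners on nqn.2016-06.io.spdk:cnode1. The sketch below is reconstructed only from the commands visible in this trace and is an approximation, not the actual test/nvmf/host/multipath_status.sh source:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # port_status <trsvcid> <field> <expected>: read one field of one I/O path
  # from the bdevperf app and compare it with the expected value.
  port_status() {
      local got
      got=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
      [[ "$got" == "$3" ]]
  }

  # check_status <4420 current> <4421 current> <4420 connected> <4421 connected> <4420 accessible> <4421 accessible>
  check_status() {
      port_status 4420 current "$1" && port_status 4421 current "$2" &&
      port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
      port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
  }

  # set_ANA_state <state for 4420> <state for 4421>: update both listeners on
  # the target (note: no -s here, this goes to the nvmf target, not bdevperf).
  set_ANA_state() {
      "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
             -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
             -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

The sleep 1 between set_ANA_state and check_status gives the host driver time to pick up the ANA change before the flags are re-read, and after bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active both accessible paths are expected to report current=true, which is what the check_status true true ... blocks above assert.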
00:16:29.165 07:20:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76738 00:16:29.165 Connection closed with partial response: 00:16:29.165 00:16:29.165 00:16:29.433 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76738 00:16:29.433 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:29.433 [2024-07-15 07:20:01.200686] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:16:29.433 [2024-07-15 07:20:01.200808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76738 ] 00:16:29.433 [2024-07-15 07:20:01.340587] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.433 [2024-07-15 07:20:01.398102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:29.433 [2024-07-15 07:20:01.426464] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:29.433 Running I/O for 90 seconds... 00:16:29.433 [2024-07-15 07:20:18.448735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:55824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.433 [2024-07-15 07:20:18.448823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:29.433 [2024-07-15 07:20:18.448864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:55832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.433 [2024-07-15 07:20:18.448883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:29.433 [2024-07-15 07:20:18.448907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:55840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.433 [2024-07-15 07:20:18.448923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:29.433 [2024-07-15 07:20:18.448946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:55848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.433 [2024-07-15 07:20:18.448962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.448985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.434 [2024-07-15 07:20:18.449001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.449023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:55864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.434 [2024-07-15 07:20:18.449040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.449083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:55872 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.434 [2024-07-15 07:20:18.449102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.449126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:55880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.434 [2024-07-15 07:20:18.449142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.449164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:55312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.434 [2024-07-15 07:20:18.449180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.449203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:55320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.434 [2024-07-15 07:20:18.449219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.449242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.434 [2024-07-15 07:20:18.449289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.449316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:55336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.434 [2024-07-15 07:20:18.449333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.449355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:55344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.434 [2024-07-15 07:20:18.449371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.449407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:55352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.434 [2024-07-15 07:20:18.449427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.449451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:55360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.434 [2024-07-15 07:20:18.449467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.449491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:55368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.434 [2024-07-15 07:20:18.449507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.449535] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:55888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.434 [2024-07-15 07:20:18.449553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.449577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.434 [2024-07-15 07:20:18.449601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.449624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:55904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.434 [2024-07-15 07:20:18.449641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.449664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:55912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.434 [2024-07-15 07:20:18.449680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.449703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:55920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.434 [2024-07-15 07:20:18.449719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.449742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:55928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.434 [2024-07-15 07:20:18.449758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.449781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:55936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.434 [2024-07-15 07:20:18.449815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.449841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.434 [2024-07-15 07:20:18.449858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.449882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:55376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.434 [2024-07-15 07:20:18.449898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.449921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:55384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.434 [2024-07-15 07:20:18.449937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 
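The dump that begins at "cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt" above is bdevperf's own log: SPDK/DPDK initialization, the 90-second I/O run, and then one command/completion pair for every I/O whose completion was reported with an error status while its path was ANA-inaccessible. In each pair, nvme_io_qpair_print_command shows the submitted command (opcode, sqid, cid, nsid, LBA, length, SGL type) and spdk_nvme_print_completion shows its status; ASYMMETRIC ACCESS INACCESSIBLE (03/02) is NVMe status code type 0x3 (path related) with status code 0x02, which the NVMe bdev layer treats as a path error so the I/O can be retried on the other path. Note that the bracketed wall-clock times (07:20:18) coincide with the set_ANA_state inaccessible inaccessible step earlier in the trace. As a quick sanity check one could count these completions in the captured file; the one-liner below is only an illustration using the file path printed by the test:

  # Illustrative only: tally the I/O completions in the captured bdevperf log
  # that came back with the ANA "inaccessible" path status (sct 0x3, sc 0x02).
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
      /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt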
00:16:29.434 [2024-07-15 07:20:18.449960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:55392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.434 [2024-07-15 07:20:18.449977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.450000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:55400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.434 [2024-07-15 07:20:18.450016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.450039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:55408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.434 [2024-07-15 07:20:18.450055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.450093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:55416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.434 [2024-07-15 07:20:18.450111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.450135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:55424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.434 [2024-07-15 07:20:18.450151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.450174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:55432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.434 [2024-07-15 07:20:18.450191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.450231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.434 [2024-07-15 07:20:18.450262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.450286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:55960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.434 [2024-07-15 07:20:18.450303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.450327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.434 [2024-07-15 07:20:18.450343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.450385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:55976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.434 [2024-07-15 07:20:18.450403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.450426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.434 [2024-07-15 07:20:18.450442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.450466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.434 [2024-07-15 07:20:18.450482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.450505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.434 [2024-07-15 07:20:18.450521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.450544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.434 [2024-07-15 07:20:18.450560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.450584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:55440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.434 [2024-07-15 07:20:18.450600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.450623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:55448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.434 [2024-07-15 07:20:18.450639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.450662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:55456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.434 [2024-07-15 07:20:18.450678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.450701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:55464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.434 [2024-07-15 07:20:18.450724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.450756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:55472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.434 [2024-07-15 07:20:18.450774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.434 [2024-07-15 07:20:18.450797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.435 [2024-07-15 07:20:18.450821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.450846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:55488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.435 [2024-07-15 07:20:18.450875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.450951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:55496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.435 [2024-07-15 07:20:18.450980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.451025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:55504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.435 [2024-07-15 07:20:18.451052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.451113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:55512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.435 [2024-07-15 07:20:18.451143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.451190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:55520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.435 [2024-07-15 07:20:18.451216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.451254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.435 [2024-07-15 07:20:18.451272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.451295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:55536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.435 [2024-07-15 07:20:18.451311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.451335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.435 [2024-07-15 07:20:18.451356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.451392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:55552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.435 [2024-07-15 07:20:18.451422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.451458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:55560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:29.435 [2024-07-15 07:20:18.451485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.451525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:55568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.435 [2024-07-15 07:20:18.451555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.451593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:55576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.435 [2024-07-15 07:20:18.451621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.451656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:55584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.435 [2024-07-15 07:20:18.451683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.451748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:55592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.435 [2024-07-15 07:20:18.451782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.451818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:55600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.435 [2024-07-15 07:20:18.451837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.451863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:55608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.435 [2024-07-15 07:20:18.451881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.451908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:55616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.435 [2024-07-15 07:20:18.451932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.451955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:55624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.435 [2024-07-15 07:20:18.451972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.452013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.435 [2024-07-15 07:20:18.452045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.452115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 
nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.435 [2024-07-15 07:20:18.452146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.452193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.435 [2024-07-15 07:20:18.452221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.452264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:56040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.435 [2024-07-15 07:20:18.452291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.452335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:56048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.435 [2024-07-15 07:20:18.452363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.452408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.435 [2024-07-15 07:20:18.452434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.452469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:56064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.435 [2024-07-15 07:20:18.452494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.452520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:56072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.435 [2024-07-15 07:20:18.452556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.452581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.435 [2024-07-15 07:20:18.452598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.452624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:56088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.435 [2024-07-15 07:20:18.452642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.452669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:56096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.435 [2024-07-15 07:20:18.452686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.452710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:56104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.435 [2024-07-15 07:20:18.452731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.452770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:56112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.435 [2024-07-15 07:20:18.452800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.452838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.435 [2024-07-15 07:20:18.452867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.452903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.435 [2024-07-15 07:20:18.452930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.452965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:56136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.435 [2024-07-15 07:20:18.452992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.453026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:56144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.435 [2024-07-15 07:20:18.453052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.453108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:56152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.435 [2024-07-15 07:20:18.453136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.453160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:56160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.435 [2024-07-15 07:20:18.453177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.453200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:56168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.435 [2024-07-15 07:20:18.453235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.453262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:56176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.435 [2024-07-15 07:20:18.453279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:16:29.435 [2024-07-15 07:20:18.453303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:56184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.435 [2024-07-15 07:20:18.453319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:29.435 [2024-07-15 07:20:18.453341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:56192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.435 [2024-07-15 07:20:18.453357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.453380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:56200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.436 [2024-07-15 07:20:18.453411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.453461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:55632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.436 [2024-07-15 07:20:18.453489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.453529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:55640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.436 [2024-07-15 07:20:18.453557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.453598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:55648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.436 [2024-07-15 07:20:18.453629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.453665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:55656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.436 [2024-07-15 07:20:18.453692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.453735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:55664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.436 [2024-07-15 07:20:18.453772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.453813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.436 [2024-07-15 07:20:18.453840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.453884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:55680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.436 [2024-07-15 07:20:18.453911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.453952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:55688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.436 [2024-07-15 07:20:18.453982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.454038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:56208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.436 [2024-07-15 07:20:18.454058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.454098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:56216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.436 [2024-07-15 07:20:18.454117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.454140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:56224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.436 [2024-07-15 07:20:18.454157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.454181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.436 [2024-07-15 07:20:18.454201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.454225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:56240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.436 [2024-07-15 07:20:18.454241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.454264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:56248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.436 [2024-07-15 07:20:18.454280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.454303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:56256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.436 [2024-07-15 07:20:18.454319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.454343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:56264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.436 [2024-07-15 07:20:18.454359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.454382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:55696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.436 [2024-07-15 07:20:18.454398] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.454421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:55704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.436 [2024-07-15 07:20:18.454437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.454460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:55712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.436 [2024-07-15 07:20:18.454476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.454499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:55720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.436 [2024-07-15 07:20:18.454523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.454563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:55728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.436 [2024-07-15 07:20:18.454582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.454606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:55736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.436 [2024-07-15 07:20:18.454622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.454645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:55744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.436 [2024-07-15 07:20:18.454662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.454686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:55752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.436 [2024-07-15 07:20:18.454702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.454726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.436 [2024-07-15 07:20:18.454742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.454765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:55768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.436 [2024-07-15 07:20:18.454782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.454806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:55776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:29.436 [2024-07-15 07:20:18.454822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.454845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:55784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.436 [2024-07-15 07:20:18.454862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.454885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:55792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.436 [2024-07-15 07:20:18.454901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.454924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.436 [2024-07-15 07:20:18.454941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.454965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:55808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.436 [2024-07-15 07:20:18.454982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.456443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.436 [2024-07-15 07:20:18.456474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.456504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:56272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.436 [2024-07-15 07:20:18.456544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.456571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:56280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.436 [2024-07-15 07:20:18.456588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.456612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:56288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.436 [2024-07-15 07:20:18.456629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.456652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:56296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.436 [2024-07-15 07:20:18.456671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.456696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 
nsid:1 lba:56304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.436 [2024-07-15 07:20:18.456712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.456735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:56312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.436 [2024-07-15 07:20:18.456752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.456775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:56320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.436 [2024-07-15 07:20:18.456792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.456952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:56328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.436 [2024-07-15 07:20:18.456979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:29.436 [2024-07-15 07:20:18.457007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.437 [2024-07-15 07:20:18.457025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.457049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:55832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.437 [2024-07-15 07:20:18.457066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.457107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.437 [2024-07-15 07:20:18.457125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.457149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:55848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.437 [2024-07-15 07:20:18.457166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.457189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:55856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.437 [2024-07-15 07:20:18.457229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.457257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:55864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.437 [2024-07-15 07:20:18.457274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.457299] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:55872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.437 [2024-07-15 07:20:18.457315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.457562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:55880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.437 [2024-07-15 07:20:18.457592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.457622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:55312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.437 [2024-07-15 07:20:18.457640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.457664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:55320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.437 [2024-07-15 07:20:18.457681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.457704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.437 [2024-07-15 07:20:18.457720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.457744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.437 [2024-07-15 07:20:18.457761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.457784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:55344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.437 [2024-07-15 07:20:18.457800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.457823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:55352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.437 [2024-07-15 07:20:18.457840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.457870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:55360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.437 [2024-07-15 07:20:18.457888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.457921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:55368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.437 [2024-07-15 07:20:18.457944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 m:0 
dnr:0 00:16:29.437 [2024-07-15 07:20:18.457983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.437 [2024-07-15 07:20:18.458003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.458040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:55896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.437 [2024-07-15 07:20:18.458070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.458114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.437 [2024-07-15 07:20:18.458131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.458155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:55912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.437 [2024-07-15 07:20:18.458171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.458194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:55920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.437 [2024-07-15 07:20:18.458210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.458233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:55928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.437 [2024-07-15 07:20:18.458249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.458273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:55936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.437 [2024-07-15 07:20:18.458293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.458337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:55944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.437 [2024-07-15 07:20:18.458358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.458383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:55376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.437 [2024-07-15 07:20:18.458399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.458423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:55384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.437 [2024-07-15 07:20:18.458439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.458461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:55392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.437 [2024-07-15 07:20:18.458478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.458501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:55400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.437 [2024-07-15 07:20:18.458517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.458543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:55408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.437 [2024-07-15 07:20:18.458572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.458617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:55416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.437 [2024-07-15 07:20:18.458636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.458659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:55424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.437 [2024-07-15 07:20:18.458675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.458704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.437 [2024-07-15 07:20:18.458721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.458745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:55952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.437 [2024-07-15 07:20:18.458761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.458786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.437 [2024-07-15 07:20:18.458814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.458853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:55968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.437 [2024-07-15 07:20:18.458883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.458925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:55976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.437 [2024-07-15 07:20:18.458956] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.458993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.437 [2024-07-15 07:20:18.459021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:29.437 [2024-07-15 07:20:18.459057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.437 [2024-07-15 07:20:18.459106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.459147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.438 [2024-07-15 07:20:18.459175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.459658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.438 [2024-07-15 07:20:18.459696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.459751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:55440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.438 [2024-07-15 07:20:18.459782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.459824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:55448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.438 [2024-07-15 07:20:18.459873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.459915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:55456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.438 [2024-07-15 07:20:18.459944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.459980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:55464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.438 [2024-07-15 07:20:18.460008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.460045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:55472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.438 [2024-07-15 07:20:18.460090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.460134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:55480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:29.438 [2024-07-15 07:20:18.460162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.460201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:55488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.438 [2024-07-15 07:20:18.460224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.460248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:55496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.438 [2024-07-15 07:20:18.460264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.460288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:55504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.438 [2024-07-15 07:20:18.460304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.460326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:55512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.438 [2024-07-15 07:20:18.460342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.460365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:55520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.438 [2024-07-15 07:20:18.460381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.460404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:55528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.438 [2024-07-15 07:20:18.460420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.460443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:55536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.438 [2024-07-15 07:20:18.460459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.460484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:55544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.438 [2024-07-15 07:20:18.460534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.460589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:55552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.438 [2024-07-15 07:20:18.460619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.460662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 
nsid:1 lba:55560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.438 [2024-07-15 07:20:18.460690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.460731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.438 [2024-07-15 07:20:18.460757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.460820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:55576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.438 [2024-07-15 07:20:18.460848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.460889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.438 [2024-07-15 07:20:18.460918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.460960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:55592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.438 [2024-07-15 07:20:18.460987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.461029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:55600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.438 [2024-07-15 07:20:18.461056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.461118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:55608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.438 [2024-07-15 07:20:18.461149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.461201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:55616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.438 [2024-07-15 07:20:18.461230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:29.438 [2024-07-15 07:20:18.461289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:55624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.438 [2024-07-15 07:20:18.461317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.461360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.461390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.461455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.461483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.461539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.461573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.461620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:56040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.461647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.461695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:56048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.461723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.461758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.461778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.461803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:56064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.461825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.461854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:56072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.461872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.461896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.461912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.461939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:56088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.461956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.461980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:56096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.461996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:29.439 
[2024-07-15 07:20:18.462018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:56104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.462041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.462093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:56112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.462114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.462138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.462159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.462196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:56128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.462225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.462256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:56136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.462273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.462296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:56144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.462312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.462338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:56152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.462368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.462408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:56160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.462436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.462461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:56168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.462478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.462514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:56176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.462534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:74 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.462559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.462575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.462598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:56192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.462615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.462638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:56200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.462653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.462676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:55632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.439 [2024-07-15 07:20:18.462693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.462716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:55640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.439 [2024-07-15 07:20:18.462732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.462755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:55648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.439 [2024-07-15 07:20:18.462781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.462812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:55656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.439 [2024-07-15 07:20:18.462829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.462853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:55664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.439 [2024-07-15 07:20:18.462869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.462892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:55672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.439 [2024-07-15 07:20:18.462912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.462936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:55680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.439 [2024-07-15 07:20:18.462953] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.462977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:55688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.439 [2024-07-15 07:20:18.462996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.463020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.463036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.463062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:56216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.463101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.463151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:56224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.463178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.463216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:56232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.463244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.463281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:56240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.463308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.463343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:56248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.463370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.463406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:56256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.463447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.463495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:56264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.439 [2024-07-15 07:20:18.463520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.463555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:55696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:29.439 [2024-07-15 07:20:18.463573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:29.439 [2024-07-15 07:20:18.463596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.440 [2024-07-15 07:20:18.463612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.463635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.440 [2024-07-15 07:20:18.463659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.463707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:55720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.440 [2024-07-15 07:20:18.463735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.463785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:55728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.440 [2024-07-15 07:20:18.463813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.463863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:55736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.440 [2024-07-15 07:20:18.463890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.463934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:55744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.440 [2024-07-15 07:20:18.463962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.463997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:55752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.440 [2024-07-15 07:20:18.464024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.464103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:55760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.440 [2024-07-15 07:20:18.464136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.464181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:55768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.440 [2024-07-15 07:20:18.464210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.464251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 
nsid:1 lba:55776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.440 [2024-07-15 07:20:18.464279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.464334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:55784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.440 [2024-07-15 07:20:18.464366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.464408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:55792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.440 [2024-07-15 07:20:18.464438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.464474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:55800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.440 [2024-07-15 07:20:18.464492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.464516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:55808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.440 [2024-07-15 07:20:18.464533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.464556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.440 [2024-07-15 07:20:18.464572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.464595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:56272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.440 [2024-07-15 07:20:18.464616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.464640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:56280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.440 [2024-07-15 07:20:18.464656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.464679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:56288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.440 [2024-07-15 07:20:18.464695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.464718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:56296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.440 [2024-07-15 07:20:18.464734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.464757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:56304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.440 [2024-07-15 07:20:18.464773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.464796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:56312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.440 [2024-07-15 07:20:18.464812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.464841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:56320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.440 [2024-07-15 07:20:18.464859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.464897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:56328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.440 [2024-07-15 07:20:18.464915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.464943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:55824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.440 [2024-07-15 07:20:18.464960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.464983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:55832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.440 [2024-07-15 07:20:18.465002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.465040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.440 [2024-07-15 07:20:18.465088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.465136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:55848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.440 [2024-07-15 07:20:18.465164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.465207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.440 [2024-07-15 07:20:18.465235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.465279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:55864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.440 [2024-07-15 07:20:18.465305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 
00:16:29.440 [2024-07-15 07:20:18.465340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:55872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.440 [2024-07-15 07:20:18.465360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.465384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:55880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.440 [2024-07-15 07:20:18.465416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.465441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:55312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.440 [2024-07-15 07:20:18.465458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.465481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:55320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.440 [2024-07-15 07:20:18.465498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.465534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.440 [2024-07-15 07:20:18.465564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.465616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:55336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.440 [2024-07-15 07:20:18.465646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.465683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:55344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.440 [2024-07-15 07:20:18.465711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.465746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.440 [2024-07-15 07:20:18.465773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.465809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:55360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.440 [2024-07-15 07:20:18.465840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.465880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:55368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.440 [2024-07-15 07:20:18.465909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.465936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:55888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.440 [2024-07-15 07:20:18.465953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.465978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:55896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.440 [2024-07-15 07:20:18.465997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:29.440 [2024-07-15 07:20:18.466021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.440 [2024-07-15 07:20:18.466037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:18.466063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:55912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.441 [2024-07-15 07:20:18.466095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:18.466132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.441 [2024-07-15 07:20:18.466162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:18.466213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:55928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.441 [2024-07-15 07:20:18.466241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:18.466290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:55936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.441 [2024-07-15 07:20:18.466319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:18.466376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:55944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.441 [2024-07-15 07:20:18.466421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:18.466468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:55376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.441 [2024-07-15 07:20:18.466494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:18.466537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:55384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.441 [2024-07-15 07:20:18.466567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:18.466612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:55392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.441 [2024-07-15 07:20:18.466639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:18.466676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:55400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.441 [2024-07-15 07:20:18.466707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:18.466741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.441 [2024-07-15 07:20:18.466768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:18.466808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:55416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.441 [2024-07-15 07:20:18.466837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:18.466880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:55424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.441 [2024-07-15 07:20:18.466911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:18.466950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:55432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.441 [2024-07-15 07:20:18.466977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:18.467015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.441 [2024-07-15 07:20:18.467045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:18.467111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:55960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.441 [2024-07-15 07:20:18.467142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:18.467189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:55968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.441 [2024-07-15 07:20:18.467217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:18.467264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:55976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:29.441 [2024-07-15 07:20:18.467305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:18.467358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.441 [2024-07-15 07:20:18.467386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:18.467437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.441 [2024-07-15 07:20:18.467465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:18.468186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.441 [2024-07-15 07:20:18.468218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:35.378121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.441 [2024-07-15 07:20:35.378197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:35.378238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:89224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.441 [2024-07-15 07:20:35.378256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:35.378280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.441 [2024-07-15 07:20:35.378296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:35.378319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:89256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.441 [2024-07-15 07:20:35.378334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:35.378357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.441 [2024-07-15 07:20:35.378372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:35.378395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:88696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.441 [2024-07-15 07:20:35.378410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:35.378433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 
lba:88728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.441 [2024-07-15 07:20:35.378448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:35.378471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:88760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.441 [2024-07-15 07:20:35.378486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:35.378509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:88792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.441 [2024-07-15 07:20:35.378524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:35.378576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.441 [2024-07-15 07:20:35.378593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:35.378616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.441 [2024-07-15 07:20:35.378632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:35.378654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:89312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.441 [2024-07-15 07:20:35.378669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:35.378691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:89328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.441 [2024-07-15 07:20:35.378706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:35.378728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:89344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.441 [2024-07-15 07:20:35.378743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:35.378765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.441 [2024-07-15 07:20:35.378780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:35.378802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:88864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.441 [2024-07-15 07:20:35.378818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:35.378840] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:88896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.441 [2024-07-15 07:20:35.378863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:35.378887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:88928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.441 [2024-07-15 07:20:35.378903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:35.378926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.441 [2024-07-15 07:20:35.378941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:35.378963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.441 [2024-07-15 07:20:35.378979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:35.379001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:88976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.441 [2024-07-15 07:20:35.379016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:29.441 [2024-07-15 07:20:35.379048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.441 [2024-07-15 07:20:35.379064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.379102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:89416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.442 [2024-07-15 07:20:35.379119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.379141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.442 [2024-07-15 07:20:35.379158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.379181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:89448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.442 [2024-07-15 07:20:35.379196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.379219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:88856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.442 [2024-07-15 07:20:35.379234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:16:29.442 [2024-07-15 07:20:35.379257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:88888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.442 [2024-07-15 07:20:35.379273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.379296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:88920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.442 [2024-07-15 07:20:35.379312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.379334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:89456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.442 [2024-07-15 07:20:35.379350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.379372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.442 [2024-07-15 07:20:35.379388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.379410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:88968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.442 [2024-07-15 07:20:35.379425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.379448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:89000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.442 [2024-07-15 07:20:35.379463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.379486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:89032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.442 [2024-07-15 07:20:35.379501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.379524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:89064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.442 [2024-07-15 07:20:35.379548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.379572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.442 [2024-07-15 07:20:35.379588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.379611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:89008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.442 [2024-07-15 07:20:35.379627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.379649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:89040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.442 [2024-07-15 07:20:35.379665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.379688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.442 [2024-07-15 07:20:35.379704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.379726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:89096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.442 [2024-07-15 07:20:35.379742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.379765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.442 [2024-07-15 07:20:35.379781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.379803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:89496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.442 [2024-07-15 07:20:35.379818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.379841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.442 [2024-07-15 07:20:35.379856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.379885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:89528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.442 [2024-07-15 07:20:35.379901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.379924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.442 [2024-07-15 07:20:35.379939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.379961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:89560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.442 [2024-07-15 07:20:35.379977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.380000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.442 [2024-07-15 07:20:35.380023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.380047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:89144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.442 [2024-07-15 07:20:35.380062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.380101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:89176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.442 [2024-07-15 07:20:35.380118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.381543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:89592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.442 [2024-07-15 07:20:35.381575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.381605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:89608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.442 [2024-07-15 07:20:35.381634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.381658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.442 [2024-07-15 07:20:35.381674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.381698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:89640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.442 [2024-07-15 07:20:35.381714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.381738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:89656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.442 [2024-07-15 07:20:35.381753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.381776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:89672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.442 [2024-07-15 07:20:35.381791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.381815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:89120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.442 [2024-07-15 07:20:35.381830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:29.442 [2024-07-15 07:20:35.381854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:89152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:29.442 [2024-07-15 07:20:35.381869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.381892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:89184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.443 [2024-07-15 07:20:35.381907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.381930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:89200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.443 [2024-07-15 07:20:35.381945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.381982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.443 [2024-07-15 07:20:35.381999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.382040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.443 [2024-07-15 07:20:35.382059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.382096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.443 [2024-07-15 07:20:35.382114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.382136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.443 [2024-07-15 07:20:35.382152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.382175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.443 [2024-07-15 07:20:35.382191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.382213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:88696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.443 [2024-07-15 07:20:35.382229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.382253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:88760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.443 [2024-07-15 07:20:35.382269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.382292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 
nsid:1 lba:89296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.443 [2024-07-15 07:20:35.382308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.382330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:89312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.443 [2024-07-15 07:20:35.382346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.382368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.443 [2024-07-15 07:20:35.382383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.382406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:88864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.443 [2024-07-15 07:20:35.382421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.382444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:88928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.443 [2024-07-15 07:20:35.382459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.382496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:89384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.443 [2024-07-15 07:20:35.382513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.382540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.443 [2024-07-15 07:20:35.382557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.382579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.443 [2024-07-15 07:20:35.382596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.382620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:88856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.443 [2024-07-15 07:20:35.382636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.382658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:88920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.443 [2024-07-15 07:20:35.382674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.382696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.443 [2024-07-15 07:20:35.382712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.382735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:89000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.443 [2024-07-15 07:20:35.382750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.382772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:89064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.443 [2024-07-15 07:20:35.382787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.382810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:89008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.443 [2024-07-15 07:20:35.382825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.382848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:89072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.443 [2024-07-15 07:20:35.382864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.382887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:89480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.443 [2024-07-15 07:20:35.382902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.382925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:89512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.443 [2024-07-15 07:20:35.382941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.382963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.443 [2024-07-15 07:20:35.382986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.383010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.443 [2024-07-15 07:20:35.383026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.383049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:89176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.443 [2024-07-15 07:20:35.383064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003a p:0 m:0 dnr:0 
00:16:29.443 [2024-07-15 07:20:35.384636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.443 [2024-07-15 07:20:35.384666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.384695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.443 [2024-07-15 07:20:35.384712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.384736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:89216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.443 [2024-07-15 07:20:35.384751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.384774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.443 [2024-07-15 07:20:35.384790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.384813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:89280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.443 [2024-07-15 07:20:35.384829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.384851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.443 [2024-07-15 07:20:35.384867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.384890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.443 [2024-07-15 07:20:35.384906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.384928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.443 [2024-07-15 07:20:35.384944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.384966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.443 [2024-07-15 07:20:35.384982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.385004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:89352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.443 [2024-07-15 07:20:35.385039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.385063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:89608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.443 [2024-07-15 07:20:35.385093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.385119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.443 [2024-07-15 07:20:35.385135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:29.443 [2024-07-15 07:20:35.385157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:89672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.443 [2024-07-15 07:20:35.385173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.385195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:89152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.444 [2024-07-15 07:20:35.385211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.385234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:89200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.444 [2024-07-15 07:20:35.385249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.385276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.444 [2024-07-15 07:20:35.385293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.385316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:89224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.444 [2024-07-15 07:20:35.385331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.385354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:88696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.444 [2024-07-15 07:20:35.385370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.385392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:89296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.444 [2024-07-15 07:20:35.385407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.385444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.444 [2024-07-15 07:20:35.385461] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.385484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:88928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.444 [2024-07-15 07:20:35.385499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.385522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.444 [2024-07-15 07:20:35.385537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.385579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.444 [2024-07-15 07:20:35.385596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.385618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.444 [2024-07-15 07:20:35.385634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.385657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:89064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.444 [2024-07-15 07:20:35.385673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.385695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:89072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.444 [2024-07-15 07:20:35.385711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.385733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:89512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.444 [2024-07-15 07:20:35.385749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.385772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.444 [2024-07-15 07:20:35.385788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.385811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.444 [2024-07-15 07:20:35.385826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.385849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:89408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:29.444 [2024-07-15 07:20:35.385865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.385888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:89440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.444 [2024-07-15 07:20:35.385904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.385926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.444 [2024-07-15 07:20:35.385942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.385965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.444 [2024-07-15 07:20:35.385981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.386003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.444 [2024-07-15 07:20:35.386019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.386049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.444 [2024-07-15 07:20:35.386066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.386102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.444 [2024-07-15 07:20:35.386119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.386150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.444 [2024-07-15 07:20:35.386166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.386190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.444 [2024-07-15 07:20:35.386206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.389142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:89504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.444 [2024-07-15 07:20:35.389173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.389203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 
nsid:1 lba:89536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.444 [2024-07-15 07:20:35.389220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.389244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:89568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.444 [2024-07-15 07:20:35.389261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.389285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.444 [2024-07-15 07:20:35.389301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.389324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.444 [2024-07-15 07:20:35.389340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.389364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.444 [2024-07-15 07:20:35.389380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.389402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:89584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.444 [2024-07-15 07:20:35.389429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.389454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:89616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.444 [2024-07-15 07:20:35.389478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.389513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:89648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.444 [2024-07-15 07:20:35.389531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.389553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.444 [2024-07-15 07:20:35.389570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.389593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.444 [2024-07-15 07:20:35.389609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.389632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:89688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.444 [2024-07-15 07:20:35.389647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.389670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:89720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.444 [2024-07-15 07:20:35.389686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.389709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.444 [2024-07-15 07:20:35.389725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.389749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.444 [2024-07-15 07:20:35.389765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.389788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.444 [2024-07-15 07:20:35.389804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:29.444 [2024-07-15 07:20:35.389827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.444 [2024-07-15 07:20:35.389843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.389866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.445 [2024-07-15 07:20:35.389881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.389905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.445 [2024-07-15 07:20:35.389921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.389944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:89352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.445 [2024-07-15 07:20:35.389960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.389982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:89640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.445 [2024-07-15 07:20:35.390006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 
dnr:0 00:16:29.445 [2024-07-15 07:20:35.390030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:89152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.445 [2024-07-15 07:20:35.390050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.390085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.445 [2024-07-15 07:20:35.390103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.390126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:88696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.445 [2024-07-15 07:20:35.390143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.390166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:89344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.445 [2024-07-15 07:20:35.390182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.390205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.445 [2024-07-15 07:20:35.390220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.390244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.445 [2024-07-15 07:20:35.390259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.390282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:89072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.445 [2024-07-15 07:20:35.390298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.390321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.445 [2024-07-15 07:20:35.390337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.390360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.445 [2024-07-15 07:20:35.390376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.390399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.445 [2024-07-15 07:20:35.390414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.390437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.445 [2024-07-15 07:20:35.390453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.390476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.445 [2024-07-15 07:20:35.390500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.390540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.445 [2024-07-15 07:20:35.390571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.390595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.445 [2024-07-15 07:20:35.390611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.390634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.445 [2024-07-15 07:20:35.390650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.390673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:89368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.445 [2024-07-15 07:20:35.390689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.390712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:89448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.445 [2024-07-15 07:20:35.390728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.390750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:89496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.445 [2024-07-15 07:20:35.390766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.390789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:89560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.445 [2024-07-15 07:20:35.390805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.390827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:90048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.445 [2024-07-15 07:20:35.390843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.390866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.445 [2024-07-15 07:20:35.390882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.390905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.445 [2024-07-15 07:20:35.390920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.390943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.445 [2024-07-15 07:20:35.390959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.390982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:89744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.445 [2024-07-15 07:20:35.390998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.391030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:89776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.445 [2024-07-15 07:20:35.391047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.391081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.445 [2024-07-15 07:20:35.391099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.391123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:89592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.445 [2024-07-15 07:20:35.391139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.391161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:90120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.445 [2024-07-15 07:20:35.391177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.391199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.445 [2024-07-15 07:20:35.391215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.391238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:89656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:29.445 [2024-07-15 07:20:35.391254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.391277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:89728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.445 [2024-07-15 07:20:35.391294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.393774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:89384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.445 [2024-07-15 07:20:35.393803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.393832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.445 [2024-07-15 07:20:35.393850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.393873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.445 [2024-07-15 07:20:35.393890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.393913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.445 [2024-07-15 07:20:35.393929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.393951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.445 [2024-07-15 07:20:35.393967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.394002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.445 [2024-07-15 07:20:35.394020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:29.445 [2024-07-15 07:20:35.394043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:89536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.445 [2024-07-15 07:20:35.394059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.394095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.446 [2024-07-15 07:20:35.394113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.394136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 
lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.446 [2024-07-15 07:20:35.394152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.394175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:89616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.446 [2024-07-15 07:20:35.394190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.394213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:89680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.446 [2024-07-15 07:20:35.394229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.394252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:89688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.446 [2024-07-15 07:20:35.394267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.394290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.446 [2024-07-15 07:20:35.394305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.394328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.446 [2024-07-15 07:20:35.394344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.394367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.446 [2024-07-15 07:20:35.394382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.394405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:89352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.446 [2024-07-15 07:20:35.394421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.394444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:89152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.446 [2024-07-15 07:20:35.394460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.394483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:88696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.446 [2024-07-15 07:20:35.394507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.394546] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.446 [2024-07-15 07:20:35.394566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.394590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:89072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.446 [2024-07-15 07:20:35.394606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.394629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:89408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.446 [2024-07-15 07:20:35.394645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.394678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.446 [2024-07-15 07:20:35.394694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.394716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.446 [2024-07-15 07:20:35.394732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.394754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.446 [2024-07-15 07:20:35.394770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.394793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:89448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.446 [2024-07-15 07:20:35.394809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.394832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:89560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.446 [2024-07-15 07:20:35.394848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.394871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.446 [2024-07-15 07:20:35.394887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.394909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.446 [2024-07-15 07:20:35.394925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 
00:16:29.446 [2024-07-15 07:20:35.394948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:89776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.446 [2024-07-15 07:20:35.394964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.394987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:89592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.446 [2024-07-15 07:20:35.395011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.395035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.446 [2024-07-15 07:20:35.395051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.395086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:89728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.446 [2024-07-15 07:20:35.395104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.395128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:89848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.446 [2024-07-15 07:20:35.395144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.395166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:89880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.446 [2024-07-15 07:20:35.395182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.395205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.446 [2024-07-15 07:20:35.395221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.395244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.446 [2024-07-15 07:20:35.395260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.395282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.446 [2024-07-15 07:20:35.395298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.395321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.446 [2024-07-15 07:20:35.395336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.395359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:89968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.446 [2024-07-15 07:20:35.395375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.395398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:89992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.446 [2024-07-15 07:20:35.395414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.395437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:89752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.446 [2024-07-15 07:20:35.395452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:29.446 [2024-07-15 07:20:35.395475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:90240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.446 [2024-07-15 07:20:35.395491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.395522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.447 [2024-07-15 07:20:35.395539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.395561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.447 [2024-07-15 07:20:35.395577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.395600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.447 [2024-07-15 07:20:35.395616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.395638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.447 [2024-07-15 07:20:35.395654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.395676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:89792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.447 [2024-07-15 07:20:35.395692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.395715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:89672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.447 [2024-07-15 07:20:35.395731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.395754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:89296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.447 [2024-07-15 07:20:35.395770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.397628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:89840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.447 [2024-07-15 07:20:35.397657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.397686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.447 [2024-07-15 07:20:35.397703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.397726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.447 [2024-07-15 07:20:35.397742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.397765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.447 [2024-07-15 07:20:35.397789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.397812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.447 [2024-07-15 07:20:35.397828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.397863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:90024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.447 [2024-07-15 07:20:35.397880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.397903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.447 [2024-07-15 07:20:35.397919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.397942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:90088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.447 [2024-07-15 07:20:35.397957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.397980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:89384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:29.447 [2024-07-15 07:20:35.397996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.398019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.447 [2024-07-15 07:20:35.398035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.398057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.447 [2024-07-15 07:20:35.398084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.398111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.447 [2024-07-15 07:20:35.398127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.398150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.447 [2024-07-15 07:20:35.398166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.398189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:89680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.447 [2024-07-15 07:20:35.398205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.398228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.447 [2024-07-15 07:20:35.398244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.398267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.447 [2024-07-15 07:20:35.398283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.398306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:89152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.447 [2024-07-15 07:20:35.398323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.398364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.447 [2024-07-15 07:20:35.398393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.398418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 
nsid:1 lba:89408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.447 [2024-07-15 07:20:35.398434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.398457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.447 [2024-07-15 07:20:35.398473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.398496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:89448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.447 [2024-07-15 07:20:35.398512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.398534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.447 [2024-07-15 07:20:35.398550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.398573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:89776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.447 [2024-07-15 07:20:35.398589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.398612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.447 [2024-07-15 07:20:35.398627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.398650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:89848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.447 [2024-07-15 07:20:35.398665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.398688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:89912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.447 [2024-07-15 07:20:35.398704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.398726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.447 [2024-07-15 07:20:35.398742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.398765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:89968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.447 [2024-07-15 07:20:35.398780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.398803] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:89752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.447 [2024-07-15 07:20:35.398819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.398842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.447 [2024-07-15 07:20:35.398867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.398891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.447 [2024-07-15 07:20:35.398908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:29.447 [2024-07-15 07:20:35.398931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.448 [2024-07-15 07:20:35.398947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:29.448 [2024-07-15 07:20:35.398970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:89296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.448 [2024-07-15 07:20:35.398986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:29.448 [2024-07-15 07:20:35.400274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.448 [2024-07-15 07:20:35.400303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:29.448 [2024-07-15 07:20:35.400346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.448 [2024-07-15 07:20:35.400367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:29.448 [2024-07-15 07:20:35.400391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.448 [2024-07-15 07:20:35.400407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:29.448 [2024-07-15 07:20:35.400431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.448 [2024-07-15 07:20:35.400446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:29.448 [2024-07-15 07:20:35.400470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:90128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.448 [2024-07-15 07:20:35.400494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 
00:16:29.448 [2024-07-15 07:20:35.400517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.448 [2024-07-15 07:20:35.400533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:29.448 [2024-07-15 07:20:35.400556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:90152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.448 [2024-07-15 07:20:35.400572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:29.448 [2024-07-15 07:20:35.400595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:90184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:29.448 [2024-07-15 07:20:35.400612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:29.448 Received shutdown signal, test time was about 34.672031 seconds 00:16:29.448 00:16:29.448 Latency(us) 00:16:29.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.448 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:29.448 Verification LBA range: start 0x0 length 0x4000 00:16:29.448 Nvme0n1 : 34.67 8568.56 33.47 0.00 0.00 14905.51 502.69 4057035.87 00:16:29.448 =================================================================================================================== 00:16:29.448 Total : 8568.56 33.47 0.00 0.00 14905.51 502.69 4057035.87 00:16:29.448 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:29.706 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:16:29.706 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:29.706 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:16:29.706 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:29.706 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:16:29.706 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:29.706 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:16:29.706 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:29.706 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:29.706 rmmod nvme_tcp 00:16:29.706 rmmod nvme_fabrics 00:16:29.706 rmmod nvme_keyring 00:16:29.706 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:29.706 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:16:29.706 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:16:29.706 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 76687 ']' 00:16:29.706 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 76687 00:16:29.706 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@948 -- # '[' -z 76687 ']' 00:16:29.706 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76687 00:16:29.706 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:16:29.706 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:29.706 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76687 00:16:29.706 killing process with pid 76687 00:16:29.706 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:29.706 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:29.706 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76687' 00:16:29.706 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76687 00:16:29.706 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76687 00:16:29.965 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:29.965 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:29.965 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:29.965 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:29.965 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:29.965 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.965 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.965 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.965 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:29.965 00:16:29.965 real 0m40.167s 00:16:29.965 user 2m11.201s 00:16:29.965 sys 0m11.723s 00:16:29.965 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:29.965 07:20:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:29.965 ************************************ 00:16:29.965 END TEST nvmf_host_multipath_status 00:16:29.965 ************************************ 00:16:29.965 07:20:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:29.965 07:20:38 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:29.965 07:20:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:29.965 07:20:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:29.965 07:20:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:29.965 ************************************ 00:16:29.965 START TEST nvmf_discovery_remove_ifc 00:16:29.965 ************************************ 00:16:29.965 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:30.224 * Looking for test storage... 
00:16:30.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:30.224 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:30.224 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:30.224 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.224 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.224 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.224 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.224 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.224 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.224 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.224 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.224 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.224 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.224 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:16:30.224 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:16:30.224 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.224 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.224 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:30.224 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:30.224 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:30.224 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.224 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.224 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.224 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:30.225 Cannot find device "nvmf_tgt_br" 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:16:30.225 07:20:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:16:30.225 Cannot find device "nvmf_tgt_br2" 00:16:30.225 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:16:30.225 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:30.225 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:30.225 Cannot find device "nvmf_tgt_br" 00:16:30.225 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:16:30.225 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:30.225 Cannot find device "nvmf_tgt_br2" 00:16:30.225 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:16:30.225 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:30.225 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:30.225 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:30.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.225 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:30.225 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:30.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.225 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:30.225 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:30.225 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:30.225 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:30.225 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:30.225 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:30.225 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:30.225 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:30.225 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:30.225 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:30.484 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:30.484 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:30.484 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:30.484 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:30.484 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:30.484 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:30.484 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:30.484 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:30.484 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:30.484 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:30.484 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:30.484 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:30.484 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:30.484 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:30.484 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:30.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:16:30.484 00:16:30.484 --- 10.0.0.2 ping statistics --- 00:16:30.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.484 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:30.484 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:30.484 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:30.484 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:16:30.484 00:16:30.484 --- 10.0.0.3 ping statistics --- 00:16:30.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.484 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:16:30.484 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:30.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:30.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:30.484 00:16:30.484 --- 10.0.0.1 ping statistics --- 00:16:30.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.484 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:30.484 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.484 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:16:30.484 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:30.484 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.484 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:30.484 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:30.484 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.484 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:30.485 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:30.485 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:30.485 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:30.485 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:30.485 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:30.485 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=77558 00:16:30.485 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:30.485 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 77558 00:16:30.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.485 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77558 ']' 00:16:30.485 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.485 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:30.485 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.485 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:30.485 07:20:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:30.485 [2024-07-15 07:20:39.359297] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:16:30.485 [2024-07-15 07:20:39.359374] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.743 [2024-07-15 07:20:39.496137] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.743 [2024-07-15 07:20:39.566506] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.743 [2024-07-15 07:20:39.566573] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.743 [2024-07-15 07:20:39.566586] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.743 [2024-07-15 07:20:39.566596] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.743 [2024-07-15 07:20:39.566605] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:30.743 [2024-07-15 07:20:39.566634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.743 [2024-07-15 07:20:39.600382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:31.679 07:20:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:31.679 07:20:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:16:31.679 07:20:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:31.679 07:20:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:31.679 07:20:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:31.679 07:20:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.679 07:20:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:31.679 07:20:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.679 07:20:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:31.679 [2024-07-15 07:20:40.333928] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:31.679 [2024-07-15 07:20:40.345937] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:31.679 null0 00:16:31.679 [2024-07-15 07:20:40.382133] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.679 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
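At this point the target-side nvmf_tgt (pid 77558, core mask 0x2) is running inside the nvmf_tgt_ns_spdk namespace, and the rpc_cmd batch issued by discovery_remove_ifc.sh has left it listening for discovery on 10.0.0.2:8009 and for nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420, presumably backed by the null0 bdev whose name appears in the output above. The heredoc fed to rpc_cmd is not echoed in this log, so the following is only a plausible reconstruction using standard scripts/rpc.py methods; the bdev size and the allow-any-host flag are illustrative assumptions, not taken from the trace.

    # Sketch only: one way to produce the two listeners and the null0-backed subsystem seen above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009   # discovery service on 8009
    $rpc bdev_null_create null0 1000 512                                     # size_mb/block_size are assumptions
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a                 # -a (allow any host) is an assumption
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420             # data path on 4420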
00:16:31.679 07:20:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.679 07:20:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77587 00:16:31.679 07:20:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:31.679 07:20:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77587 /tmp/host.sock 00:16:31.679 07:20:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77587 ']' 00:16:31.679 07:20:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:31.679 07:20:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:31.679 07:20:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:31.679 07:20:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:31.679 07:20:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:31.679 [2024-07-15 07:20:40.454457] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:16:31.679 [2024-07-15 07:20:40.454528] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77587 ] 00:16:31.679 [2024-07-15 07:20:40.591501] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.963 [2024-07-15 07:20:40.662200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.530 07:20:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:32.530 07:20:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:16:32.530 07:20:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:32.530 07:20:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:32.530 07:20:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.530 07:20:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:32.530 07:20:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.530 07:20:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:32.530 07:20:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.530 07:20:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:32.530 [2024-07-15 07:20:41.483050] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:32.788 07:20:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.788 07:20:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:32.788 07:20:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.788 07:20:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:33.723 [2024-07-15 07:20:42.524908] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:33.723 [2024-07-15 07:20:42.524955] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:33.723 [2024-07-15 07:20:42.524988] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:33.723 [2024-07-15 07:20:42.530975] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:33.723 [2024-07-15 07:20:42.588076] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:33.723 [2024-07-15 07:20:42.588187] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:33.723 [2024-07-15 07:20:42.588231] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:33.723 [2024-07-15 07:20:42.588263] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:33.723 [2024-07-15 07:20:42.588299] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:33.723 07:20:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.723 07:20:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:33.723 07:20:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:33.723 07:20:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:33.723 07:20:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:33.723 07:20:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:33.723 07:20:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:33.723 07:20:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.723 07:20:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:33.723 [2024-07-15 07:20:42.593528] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xbebde0 was disconnected and freed. delete nvme_qpair. 
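The host-side app (pid 77587, launched with -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme) is driven entirely over /tmp/host.sock: bdev_nvme options are set (-e 1, as logged), framework_start_init brings the app out of --wait-for-rpc, and bdev_nvme_start_discovery attaches to the discovery service at 10.0.0.2:8009, which creates controller nvme0 and the nvme0n1 bdev the loop below polls for. Condensed into plain rpc.py calls (rpc_cmd in the trace is effectively the test framework's wrapper around scripts/rpc.py); addresses, NQNs and timeouts are copied from the traced command:

    # Condensed from the traced rpc_cmd invocations above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /tmp/host.sock bdev_nvme_set_options -e 1
    $rpc -s /tmp/host.sock framework_start_init
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
        --wait-for-attach
    $rpc -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'   # expected: nvme0n1 once attach completes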
00:16:33.723 07:20:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.723 07:20:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:33.723 07:20:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:16:33.723 07:20:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:33.723 07:20:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:33.723 07:20:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:33.723 07:20:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:33.723 07:20:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:33.723 07:20:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.723 07:20:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:33.723 07:20:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:33.723 07:20:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:33.983 07:20:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.983 07:20:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:33.983 07:20:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:34.918 07:20:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:34.918 07:20:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:34.918 07:20:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.918 07:20:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:34.918 07:20:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:34.918 07:20:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:34.918 07:20:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:34.918 07:20:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.918 07:20:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:34.918 07:20:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:35.854 07:20:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:35.854 07:20:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:35.854 07:20:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:35.854 07:20:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:35.854 07:20:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:35.854 07:20:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:16:35.854 07:20:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:36.113 07:20:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.113 07:20:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:36.113 07:20:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:37.046 07:20:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:37.046 07:20:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:37.046 07:20:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:37.046 07:20:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.046 07:20:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:37.046 07:20:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:37.046 07:20:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:37.046 07:20:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.046 07:20:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:37.046 07:20:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:38.030 07:20:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:38.030 07:20:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:38.030 07:20:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:38.030 07:20:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.030 07:20:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:38.030 07:20:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:38.030 07:20:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:38.030 07:20:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.030 07:20:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:38.030 07:20:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:39.414 07:20:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:39.414 07:20:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:39.414 07:20:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:39.414 07:20:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:39.414 07:20:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.414 07:20:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:39.414 07:20:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:39.414 07:20:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
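The repeating blocks above are the test's wait_for_bdev / get_bdev_list loop: once per second the host socket is asked for the current bdev names (bdev_get_bdevs piped through jq, sort and xargs) until the list equals the expected value, nvme0n1 while the path is healthy. A deliberately minimal restatement of that polling pattern, inferred from the traced commands (no timeout handling is shown here):

    # Polling pattern seen in the trace; socket path and bdev name are taken from the log.
    get_bdev_list() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1   # matches the one-second cadence visible in the log
        done
    }
    wait_for_bdev nvme0n1   # path up: the discovery-attached namespace is listed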
00:16:39.414 [2024-07-15 07:20:48.015967] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:39.414 [2024-07-15 07:20:48.016036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.414 [2024-07-15 07:20:48.016055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:39.414 [2024-07-15 07:20:48.016069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.414 [2024-07-15 07:20:48.016094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:39.414 [2024-07-15 07:20:48.016105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.414 [2024-07-15 07:20:48.016114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:39.414 [2024-07-15 07:20:48.016130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.414 [2024-07-15 07:20:48.016139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:39.414 [2024-07-15 07:20:48.016150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.414 [2024-07-15 07:20:48.016159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:39.414 [2024-07-15 07:20:48.016169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb51ac0 is same with the state(5) to be set 00:16:39.414 [2024-07-15 07:20:48.025961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb51ac0 (9): Bad file descriptor 00:16:39.414 07:20:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:39.414 07:20:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:39.414 [2024-07-15 07:20:48.035988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:40.349 07:20:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:40.349 07:20:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:40.349 07:20:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:40.349 07:20:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.349 07:20:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:40.349 07:20:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:40.349 07:20:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:40.349 [2024-07-15 07:20:49.083194] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:40.349 [2024-07-15 07:20:49.083576] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb51ac0 with addr=10.0.0.2, port=4420 00:16:40.349 [2024-07-15 07:20:49.083622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb51ac0 is same with the state(5) to be set 00:16:40.349 [2024-07-15 07:20:49.083686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb51ac0 (9): Bad file descriptor 00:16:40.349 [2024-07-15 07:20:49.084470] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:40.349 [2024-07-15 07:20:49.084515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:40.349 [2024-07-15 07:20:49.084534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:40.349 [2024-07-15 07:20:49.084552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:40.349 [2024-07-15 07:20:49.084591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:40.349 [2024-07-15 07:20:49.084612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:40.349 07:20:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.349 07:20:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:40.349 07:20:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:41.285 [2024-07-15 07:20:50.084668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:41.285 [2024-07-15 07:20:50.084728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:41.285 [2024-07-15 07:20:50.084741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:41.285 [2024-07-15 07:20:50.084751] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:16:41.285 [2024-07-15 07:20:50.084777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:41.285 [2024-07-15 07:20:50.084808] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:16:41.285 [2024-07-15 07:20:50.084863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.285 [2024-07-15 07:20:50.084879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.285 [2024-07-15 07:20:50.084894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.285 [2024-07-15 07:20:50.084904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.285 [2024-07-15 07:20:50.084914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.285 [2024-07-15 07:20:50.084924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.285 [2024-07-15 07:20:50.084934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.285 [2024-07-15 07:20:50.084944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.285 [2024-07-15 07:20:50.084954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.285 [2024-07-15 07:20:50.084964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.285 [2024-07-15 07:20:50.084974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:16:41.285 [2024-07-15 07:20:50.084993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb55860 (9): Bad file descriptor 00:16:41.285 [2024-07-15 07:20:50.085792] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:41.285 [2024-07-15 07:20:50.085818] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:16:41.285 07:20:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:41.285 07:20:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:41.285 07:20:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.285 07:20:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:41.285 07:20:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:41.285 07:20:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:41.285 07:20:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:41.285 07:20:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.285 07:20:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:41.285 07:20:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:41.285 07:20:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:41.285 07:20:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:41.285 07:20:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:41.285 07:20:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:41.285 07:20:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:41.285 07:20:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:41.285 07:20:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.285 07:20:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:41.285 07:20:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:41.285 07:20:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.543 07:20:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:41.543 07:20:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:42.478 07:20:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:42.478 07:20:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:42.478 07:20:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:42.478 07:20:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:42.478 07:20:51 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.478 07:20:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:42.478 07:20:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:42.478 07:20:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.478 07:20:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:42.478 07:20:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:43.415 [2024-07-15 07:20:52.089568] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:43.415 [2024-07-15 07:20:52.089611] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:43.415 [2024-07-15 07:20:52.089631] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:43.415 [2024-07-15 07:20:52.095607] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:16:43.415 [2024-07-15 07:20:52.151884] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:43.415 [2024-07-15 07:20:52.151961] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:43.415 [2024-07-15 07:20:52.151988] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:43.415 [2024-07-15 07:20:52.152009] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:16:43.415 [2024-07-15 07:20:52.152020] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:43.415 [2024-07-15 07:20:52.158379] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xbf8d90 was disconnected and freed. delete nvme_qpair. 
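This is the crux of the test. Deleting 10.0.0.2/24 from nvmf_tgt_if and downing the link (traced earlier) makes the host's reconnect attempts time out; once the 2-second ctrlr-loss timeout expires the controller is failed, the discovery entry for cnode0 is removed, and the bdev list drains to empty. Restoring the address and bringing the interface back up lets the discovery poller reattach, creating a fresh controller (nvme1) and the nvme1n1 bdev the final wait checks for. The two transitions, copied from the traced ip commands:

    # Break the path, wait for the bdev to disappear, then restore it and wait for re-attach.
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    wait_for_bdev ''          # list is empty once the 2s ctrlr-loss timeout expires

    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1     # discovery re-attaches and surfaces the namespace again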
00:16:43.415 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:43.415 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:43.416 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.416 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:43.416 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:43.416 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:43.416 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:43.416 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.675 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:43.675 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:43.675 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77587 00:16:43.675 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77587 ']' 00:16:43.675 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77587 00:16:43.675 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:16:43.675 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:43.675 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77587 00:16:43.675 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:43.675 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:43.675 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77587' 00:16:43.675 killing process with pid 77587 00:16:43.675 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77587 00:16:43.675 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77587 00:16:43.675 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:43.675 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:43.675 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:16:43.934 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:43.934 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:16:43.934 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:43.934 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:43.934 rmmod nvme_tcp 00:16:43.934 rmmod nvme_fabrics 00:16:43.934 rmmod nvme_keyring 00:16:43.934 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:43.934 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:16:43.934 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:16:43.934 07:20:52 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 77558 ']' 00:16:43.934 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 77558 00:16:43.934 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77558 ']' 00:16:43.934 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77558 00:16:43.934 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:16:43.934 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:43.934 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77558 00:16:43.934 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:43.934 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:43.934 killing process with pid 77558 00:16:43.934 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77558' 00:16:43.934 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77558 00:16:43.934 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77558 00:16:44.194 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:44.194 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:44.194 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:44.194 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:44.194 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:44.194 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.194 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.194 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.194 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:44.194 00:16:44.194 real 0m14.132s 00:16:44.194 user 0m24.527s 00:16:44.194 sys 0m2.348s 00:16:44.194 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:44.194 07:20:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:44.194 ************************************ 00:16:44.194 END TEST nvmf_discovery_remove_ifc 00:16:44.194 ************************************ 00:16:44.194 07:20:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:44.194 07:20:53 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:44.194 07:20:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:44.194 07:20:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:44.194 07:20:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:44.194 ************************************ 00:16:44.194 START TEST nvmf_identify_kernel_target 00:16:44.194 ************************************ 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:44.194 * Looking for test storage... 00:16:44.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:44.194 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:44.460 Cannot find device "nvmf_tgt_br" 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:44.460 Cannot find device "nvmf_tgt_br2" 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:44.460 Cannot find device "nvmf_tgt_br" 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:44.460 Cannot find device "nvmf_tgt_br2" 00:16:44.460 07:20:53 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:44.460 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:44.460 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:44.460 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:44.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:44.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:16:44.722 00:16:44.722 --- 10.0.0.2 ping statistics --- 00:16:44.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.722 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:44.722 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:44.722 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:16:44.722 00:16:44.722 --- 10.0.0.3 ping statistics --- 00:16:44.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.722 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:44.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:44.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:44.722 00:16:44.722 --- 10.0.0.1 ping statistics --- 00:16:44.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.722 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:44.722 07:20:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:44.980 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:44.980 Waiting for block devices as requested 00:16:44.980 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:45.238 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:45.238 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:45.238 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:45.238 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:16:45.238 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:16:45.238 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:45.238 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:45.238 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:16:45.238 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:45.238 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:45.238 No valid GPT data, bailing 00:16:45.238 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:45.238 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:45.238 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:45.238 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:16:45.238 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:45.238 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:45.238 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:16:45.238 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:16:45.238 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:45.238 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:45.238 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:16:45.238 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:45.238 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:45.497 No valid GPT data, bailing 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:45.497 No valid GPT data, bailing 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:45.497 No valid GPT data, bailing 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:45.497 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:16:45.498 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:16:45.498 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
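Editor's note: the configfs writes traced around this point (starting at nvmf/common.sh@658) build the kernel NVMe-oF target that the identify test connects to. The trace shows the echoed values but not their destination files, so the sketch below is only a reconstruction of an equivalent sequence: it assumes the standard Linux nvmet configfs attribute names, while the NQN, backing device, address and port are the ones that appear in this run.

nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys"                                          # traced at nvmf/common.sh@658
mkdir "$subsys/namespaces/1"                             # namespaces/ is auto-created with the subsystem
mkdir "$port"
echo "SPDK-$nqn" > "$subsys/attr_model"                  # assumed destination of the SPDK-… echo
echo 1 > "$subsys/attr_allow_any_host"                   # assumed destination of the first 'echo 1'
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"   # backing block device picked by the GPT scan above
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                      # publish the subsystem on the port

Once the symlink is in place, the kernel target answers the 'nvme discover' that follows in the trace.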
00:16:45.498 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:45.498 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:45.498 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:16:45.498 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:16:45.498 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:16:45.498 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:16:45.498 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:16:45.498 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:16:45.498 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:16:45.498 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:16:45.498 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:45.498 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -a 10.0.0.1 -t tcp -s 4420 00:16:45.757 00:16:45.757 Discovery Log Number of Records 2, Generation counter 2 00:16:45.757 =====Discovery Log Entry 0====== 00:16:45.757 trtype: tcp 00:16:45.757 adrfam: ipv4 00:16:45.757 subtype: current discovery subsystem 00:16:45.757 treq: not specified, sq flow control disable supported 00:16:45.757 portid: 1 00:16:45.757 trsvcid: 4420 00:16:45.757 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:45.757 traddr: 10.0.0.1 00:16:45.757 eflags: none 00:16:45.757 sectype: none 00:16:45.757 =====Discovery Log Entry 1====== 00:16:45.757 trtype: tcp 00:16:45.757 adrfam: ipv4 00:16:45.757 subtype: nvme subsystem 00:16:45.757 treq: not specified, sq flow control disable supported 00:16:45.757 portid: 1 00:16:45.757 trsvcid: 4420 00:16:45.757 subnqn: nqn.2016-06.io.spdk:testnqn 00:16:45.757 traddr: 10.0.0.1 00:16:45.757 eflags: none 00:16:45.757 sectype: none 00:16:45.757 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:16:45.757 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:16:45.757 ===================================================== 00:16:45.757 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:45.757 ===================================================== 00:16:45.757 Controller Capabilities/Features 00:16:45.757 ================================ 00:16:45.757 Vendor ID: 0000 00:16:45.757 Subsystem Vendor ID: 0000 00:16:45.757 Serial Number: 8e54de0590a5bccee6c1 00:16:45.757 Model Number: Linux 00:16:45.757 Firmware Version: 6.7.0-68 00:16:45.757 Recommended Arb Burst: 0 00:16:45.757 IEEE OUI Identifier: 00 00 00 00:16:45.757 Multi-path I/O 00:16:45.757 May have multiple subsystem ports: No 00:16:45.757 May have multiple controllers: No 00:16:45.757 Associated with SR-IOV VF: No 00:16:45.757 Max Data Transfer Size: Unlimited 00:16:45.757 Max Number of Namespaces: 0 
00:16:45.757 Max Number of I/O Queues: 1024 00:16:45.757 NVMe Specification Version (VS): 1.3 00:16:45.757 NVMe Specification Version (Identify): 1.3 00:16:45.757 Maximum Queue Entries: 1024 00:16:45.757 Contiguous Queues Required: No 00:16:45.757 Arbitration Mechanisms Supported 00:16:45.757 Weighted Round Robin: Not Supported 00:16:45.757 Vendor Specific: Not Supported 00:16:45.757 Reset Timeout: 7500 ms 00:16:45.757 Doorbell Stride: 4 bytes 00:16:45.757 NVM Subsystem Reset: Not Supported 00:16:45.757 Command Sets Supported 00:16:45.757 NVM Command Set: Supported 00:16:45.757 Boot Partition: Not Supported 00:16:45.757 Memory Page Size Minimum: 4096 bytes 00:16:45.757 Memory Page Size Maximum: 4096 bytes 00:16:45.757 Persistent Memory Region: Not Supported 00:16:45.757 Optional Asynchronous Events Supported 00:16:45.757 Namespace Attribute Notices: Not Supported 00:16:45.757 Firmware Activation Notices: Not Supported 00:16:45.757 ANA Change Notices: Not Supported 00:16:45.757 PLE Aggregate Log Change Notices: Not Supported 00:16:45.757 LBA Status Info Alert Notices: Not Supported 00:16:45.757 EGE Aggregate Log Change Notices: Not Supported 00:16:45.757 Normal NVM Subsystem Shutdown event: Not Supported 00:16:45.757 Zone Descriptor Change Notices: Not Supported 00:16:45.758 Discovery Log Change Notices: Supported 00:16:45.758 Controller Attributes 00:16:45.758 128-bit Host Identifier: Not Supported 00:16:45.758 Non-Operational Permissive Mode: Not Supported 00:16:45.758 NVM Sets: Not Supported 00:16:45.758 Read Recovery Levels: Not Supported 00:16:45.758 Endurance Groups: Not Supported 00:16:45.758 Predictable Latency Mode: Not Supported 00:16:45.758 Traffic Based Keep ALive: Not Supported 00:16:45.758 Namespace Granularity: Not Supported 00:16:45.758 SQ Associations: Not Supported 00:16:45.758 UUID List: Not Supported 00:16:45.758 Multi-Domain Subsystem: Not Supported 00:16:45.758 Fixed Capacity Management: Not Supported 00:16:45.758 Variable Capacity Management: Not Supported 00:16:45.758 Delete Endurance Group: Not Supported 00:16:45.758 Delete NVM Set: Not Supported 00:16:45.758 Extended LBA Formats Supported: Not Supported 00:16:45.758 Flexible Data Placement Supported: Not Supported 00:16:45.758 00:16:45.758 Controller Memory Buffer Support 00:16:45.758 ================================ 00:16:45.758 Supported: No 00:16:45.758 00:16:45.758 Persistent Memory Region Support 00:16:45.758 ================================ 00:16:45.758 Supported: No 00:16:45.758 00:16:45.758 Admin Command Set Attributes 00:16:45.758 ============================ 00:16:45.758 Security Send/Receive: Not Supported 00:16:45.758 Format NVM: Not Supported 00:16:45.758 Firmware Activate/Download: Not Supported 00:16:45.758 Namespace Management: Not Supported 00:16:45.758 Device Self-Test: Not Supported 00:16:45.758 Directives: Not Supported 00:16:45.758 NVMe-MI: Not Supported 00:16:45.758 Virtualization Management: Not Supported 00:16:45.758 Doorbell Buffer Config: Not Supported 00:16:45.758 Get LBA Status Capability: Not Supported 00:16:45.758 Command & Feature Lockdown Capability: Not Supported 00:16:45.758 Abort Command Limit: 1 00:16:45.758 Async Event Request Limit: 1 00:16:45.758 Number of Firmware Slots: N/A 00:16:45.758 Firmware Slot 1 Read-Only: N/A 00:16:45.758 Firmware Activation Without Reset: N/A 00:16:45.758 Multiple Update Detection Support: N/A 00:16:45.758 Firmware Update Granularity: No Information Provided 00:16:45.758 Per-Namespace SMART Log: No 00:16:45.758 Asymmetric Namespace Access Log Page: 
Not Supported 00:16:45.758 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:45.758 Command Effects Log Page: Not Supported 00:16:45.758 Get Log Page Extended Data: Supported 00:16:45.758 Telemetry Log Pages: Not Supported 00:16:45.758 Persistent Event Log Pages: Not Supported 00:16:45.758 Supported Log Pages Log Page: May Support 00:16:45.758 Commands Supported & Effects Log Page: Not Supported 00:16:45.758 Feature Identifiers & Effects Log Page:May Support 00:16:45.758 NVMe-MI Commands & Effects Log Page: May Support 00:16:45.758 Data Area 4 for Telemetry Log: Not Supported 00:16:45.758 Error Log Page Entries Supported: 1 00:16:45.758 Keep Alive: Not Supported 00:16:45.758 00:16:45.758 NVM Command Set Attributes 00:16:45.758 ========================== 00:16:45.758 Submission Queue Entry Size 00:16:45.758 Max: 1 00:16:45.758 Min: 1 00:16:45.758 Completion Queue Entry Size 00:16:45.758 Max: 1 00:16:45.758 Min: 1 00:16:45.758 Number of Namespaces: 0 00:16:45.758 Compare Command: Not Supported 00:16:45.758 Write Uncorrectable Command: Not Supported 00:16:45.758 Dataset Management Command: Not Supported 00:16:45.758 Write Zeroes Command: Not Supported 00:16:45.758 Set Features Save Field: Not Supported 00:16:45.758 Reservations: Not Supported 00:16:45.758 Timestamp: Not Supported 00:16:45.758 Copy: Not Supported 00:16:45.758 Volatile Write Cache: Not Present 00:16:45.758 Atomic Write Unit (Normal): 1 00:16:45.758 Atomic Write Unit (PFail): 1 00:16:45.758 Atomic Compare & Write Unit: 1 00:16:45.758 Fused Compare & Write: Not Supported 00:16:45.758 Scatter-Gather List 00:16:45.758 SGL Command Set: Supported 00:16:45.758 SGL Keyed: Not Supported 00:16:45.758 SGL Bit Bucket Descriptor: Not Supported 00:16:45.758 SGL Metadata Pointer: Not Supported 00:16:45.758 Oversized SGL: Not Supported 00:16:45.758 SGL Metadata Address: Not Supported 00:16:45.758 SGL Offset: Supported 00:16:45.758 Transport SGL Data Block: Not Supported 00:16:45.758 Replay Protected Memory Block: Not Supported 00:16:45.758 00:16:45.758 Firmware Slot Information 00:16:45.758 ========================= 00:16:45.758 Active slot: 0 00:16:45.758 00:16:45.758 00:16:45.758 Error Log 00:16:45.758 ========= 00:16:45.758 00:16:45.758 Active Namespaces 00:16:45.758 ================= 00:16:45.758 Discovery Log Page 00:16:45.758 ================== 00:16:45.758 Generation Counter: 2 00:16:45.758 Number of Records: 2 00:16:45.758 Record Format: 0 00:16:45.758 00:16:45.758 Discovery Log Entry 0 00:16:45.758 ---------------------- 00:16:45.758 Transport Type: 3 (TCP) 00:16:45.758 Address Family: 1 (IPv4) 00:16:45.758 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:45.758 Entry Flags: 00:16:45.758 Duplicate Returned Information: 0 00:16:45.758 Explicit Persistent Connection Support for Discovery: 0 00:16:45.758 Transport Requirements: 00:16:45.758 Secure Channel: Not Specified 00:16:45.758 Port ID: 1 (0x0001) 00:16:45.758 Controller ID: 65535 (0xffff) 00:16:45.758 Admin Max SQ Size: 32 00:16:45.758 Transport Service Identifier: 4420 00:16:45.758 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:45.758 Transport Address: 10.0.0.1 00:16:45.758 Discovery Log Entry 1 00:16:45.758 ---------------------- 00:16:45.758 Transport Type: 3 (TCP) 00:16:45.758 Address Family: 1 (IPv4) 00:16:45.758 Subsystem Type: 2 (NVM Subsystem) 00:16:45.758 Entry Flags: 00:16:45.758 Duplicate Returned Information: 0 00:16:45.758 Explicit Persistent Connection Support for Discovery: 0 00:16:45.758 Transport Requirements: 00:16:45.758 
Secure Channel: Not Specified 00:16:45.758 Port ID: 1 (0x0001) 00:16:45.758 Controller ID: 65535 (0xffff) 00:16:45.758 Admin Max SQ Size: 32 00:16:45.758 Transport Service Identifier: 4420 00:16:45.758 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:16:45.758 Transport Address: 10.0.0.1 00:16:45.758 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:16:46.018 get_feature(0x01) failed 00:16:46.018 get_feature(0x02) failed 00:16:46.018 get_feature(0x04) failed 00:16:46.018 ===================================================== 00:16:46.018 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:16:46.018 ===================================================== 00:16:46.018 Controller Capabilities/Features 00:16:46.018 ================================ 00:16:46.018 Vendor ID: 0000 00:16:46.018 Subsystem Vendor ID: 0000 00:16:46.018 Serial Number: e577f0ecc0f28e5ba359 00:16:46.018 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:16:46.018 Firmware Version: 6.7.0-68 00:16:46.018 Recommended Arb Burst: 6 00:16:46.018 IEEE OUI Identifier: 00 00 00 00:16:46.018 Multi-path I/O 00:16:46.018 May have multiple subsystem ports: Yes 00:16:46.018 May have multiple controllers: Yes 00:16:46.018 Associated with SR-IOV VF: No 00:16:46.018 Max Data Transfer Size: Unlimited 00:16:46.018 Max Number of Namespaces: 1024 00:16:46.018 Max Number of I/O Queues: 128 00:16:46.018 NVMe Specification Version (VS): 1.3 00:16:46.018 NVMe Specification Version (Identify): 1.3 00:16:46.018 Maximum Queue Entries: 1024 00:16:46.018 Contiguous Queues Required: No 00:16:46.018 Arbitration Mechanisms Supported 00:16:46.018 Weighted Round Robin: Not Supported 00:16:46.018 Vendor Specific: Not Supported 00:16:46.018 Reset Timeout: 7500 ms 00:16:46.018 Doorbell Stride: 4 bytes 00:16:46.018 NVM Subsystem Reset: Not Supported 00:16:46.018 Command Sets Supported 00:16:46.018 NVM Command Set: Supported 00:16:46.018 Boot Partition: Not Supported 00:16:46.018 Memory Page Size Minimum: 4096 bytes 00:16:46.018 Memory Page Size Maximum: 4096 bytes 00:16:46.018 Persistent Memory Region: Not Supported 00:16:46.018 Optional Asynchronous Events Supported 00:16:46.018 Namespace Attribute Notices: Supported 00:16:46.018 Firmware Activation Notices: Not Supported 00:16:46.018 ANA Change Notices: Supported 00:16:46.018 PLE Aggregate Log Change Notices: Not Supported 00:16:46.018 LBA Status Info Alert Notices: Not Supported 00:16:46.018 EGE Aggregate Log Change Notices: Not Supported 00:16:46.018 Normal NVM Subsystem Shutdown event: Not Supported 00:16:46.018 Zone Descriptor Change Notices: Not Supported 00:16:46.018 Discovery Log Change Notices: Not Supported 00:16:46.018 Controller Attributes 00:16:46.018 128-bit Host Identifier: Supported 00:16:46.018 Non-Operational Permissive Mode: Not Supported 00:16:46.018 NVM Sets: Not Supported 00:16:46.018 Read Recovery Levels: Not Supported 00:16:46.018 Endurance Groups: Not Supported 00:16:46.018 Predictable Latency Mode: Not Supported 00:16:46.018 Traffic Based Keep ALive: Supported 00:16:46.018 Namespace Granularity: Not Supported 00:16:46.018 SQ Associations: Not Supported 00:16:46.018 UUID List: Not Supported 00:16:46.018 Multi-Domain Subsystem: Not Supported 00:16:46.018 Fixed Capacity Management: Not Supported 00:16:46.018 Variable Capacity Management: Not Supported 00:16:46.018 
Delete Endurance Group: Not Supported 00:16:46.018 Delete NVM Set: Not Supported 00:16:46.018 Extended LBA Formats Supported: Not Supported 00:16:46.018 Flexible Data Placement Supported: Not Supported 00:16:46.018 00:16:46.018 Controller Memory Buffer Support 00:16:46.018 ================================ 00:16:46.018 Supported: No 00:16:46.018 00:16:46.018 Persistent Memory Region Support 00:16:46.018 ================================ 00:16:46.018 Supported: No 00:16:46.018 00:16:46.018 Admin Command Set Attributes 00:16:46.018 ============================ 00:16:46.018 Security Send/Receive: Not Supported 00:16:46.018 Format NVM: Not Supported 00:16:46.018 Firmware Activate/Download: Not Supported 00:16:46.018 Namespace Management: Not Supported 00:16:46.018 Device Self-Test: Not Supported 00:16:46.018 Directives: Not Supported 00:16:46.018 NVMe-MI: Not Supported 00:16:46.018 Virtualization Management: Not Supported 00:16:46.018 Doorbell Buffer Config: Not Supported 00:16:46.018 Get LBA Status Capability: Not Supported 00:16:46.018 Command & Feature Lockdown Capability: Not Supported 00:16:46.018 Abort Command Limit: 4 00:16:46.018 Async Event Request Limit: 4 00:16:46.018 Number of Firmware Slots: N/A 00:16:46.018 Firmware Slot 1 Read-Only: N/A 00:16:46.018 Firmware Activation Without Reset: N/A 00:16:46.018 Multiple Update Detection Support: N/A 00:16:46.018 Firmware Update Granularity: No Information Provided 00:16:46.018 Per-Namespace SMART Log: Yes 00:16:46.018 Asymmetric Namespace Access Log Page: Supported 00:16:46.018 ANA Transition Time : 10 sec 00:16:46.018 00:16:46.018 Asymmetric Namespace Access Capabilities 00:16:46.018 ANA Optimized State : Supported 00:16:46.018 ANA Non-Optimized State : Supported 00:16:46.018 ANA Inaccessible State : Supported 00:16:46.018 ANA Persistent Loss State : Supported 00:16:46.018 ANA Change State : Supported 00:16:46.018 ANAGRPID is not changed : No 00:16:46.018 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:16:46.018 00:16:46.018 ANA Group Identifier Maximum : 128 00:16:46.018 Number of ANA Group Identifiers : 128 00:16:46.018 Max Number of Allowed Namespaces : 1024 00:16:46.018 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:16:46.018 Command Effects Log Page: Supported 00:16:46.018 Get Log Page Extended Data: Supported 00:16:46.018 Telemetry Log Pages: Not Supported 00:16:46.018 Persistent Event Log Pages: Not Supported 00:16:46.018 Supported Log Pages Log Page: May Support 00:16:46.018 Commands Supported & Effects Log Page: Not Supported 00:16:46.018 Feature Identifiers & Effects Log Page:May Support 00:16:46.018 NVMe-MI Commands & Effects Log Page: May Support 00:16:46.018 Data Area 4 for Telemetry Log: Not Supported 00:16:46.018 Error Log Page Entries Supported: 128 00:16:46.018 Keep Alive: Supported 00:16:46.018 Keep Alive Granularity: 1000 ms 00:16:46.018 00:16:46.018 NVM Command Set Attributes 00:16:46.018 ========================== 00:16:46.018 Submission Queue Entry Size 00:16:46.018 Max: 64 00:16:46.018 Min: 64 00:16:46.018 Completion Queue Entry Size 00:16:46.018 Max: 16 00:16:46.018 Min: 16 00:16:46.018 Number of Namespaces: 1024 00:16:46.018 Compare Command: Not Supported 00:16:46.018 Write Uncorrectable Command: Not Supported 00:16:46.018 Dataset Management Command: Supported 00:16:46.018 Write Zeroes Command: Supported 00:16:46.018 Set Features Save Field: Not Supported 00:16:46.018 Reservations: Not Supported 00:16:46.018 Timestamp: Not Supported 00:16:46.018 Copy: Not Supported 00:16:46.018 Volatile Write Cache: Present 
00:16:46.018 Atomic Write Unit (Normal): 1 00:16:46.018 Atomic Write Unit (PFail): 1 00:16:46.018 Atomic Compare & Write Unit: 1 00:16:46.018 Fused Compare & Write: Not Supported 00:16:46.018 Scatter-Gather List 00:16:46.018 SGL Command Set: Supported 00:16:46.018 SGL Keyed: Not Supported 00:16:46.018 SGL Bit Bucket Descriptor: Not Supported 00:16:46.018 SGL Metadata Pointer: Not Supported 00:16:46.018 Oversized SGL: Not Supported 00:16:46.018 SGL Metadata Address: Not Supported 00:16:46.018 SGL Offset: Supported 00:16:46.018 Transport SGL Data Block: Not Supported 00:16:46.018 Replay Protected Memory Block: Not Supported 00:16:46.018 00:16:46.018 Firmware Slot Information 00:16:46.018 ========================= 00:16:46.018 Active slot: 0 00:16:46.018 00:16:46.018 Asymmetric Namespace Access 00:16:46.018 =========================== 00:16:46.018 Change Count : 0 00:16:46.018 Number of ANA Group Descriptors : 1 00:16:46.018 ANA Group Descriptor : 0 00:16:46.018 ANA Group ID : 1 00:16:46.018 Number of NSID Values : 1 00:16:46.018 Change Count : 0 00:16:46.018 ANA State : 1 00:16:46.018 Namespace Identifier : 1 00:16:46.018 00:16:46.018 Commands Supported and Effects 00:16:46.018 ============================== 00:16:46.018 Admin Commands 00:16:46.018 -------------- 00:16:46.018 Get Log Page (02h): Supported 00:16:46.018 Identify (06h): Supported 00:16:46.018 Abort (08h): Supported 00:16:46.018 Set Features (09h): Supported 00:16:46.018 Get Features (0Ah): Supported 00:16:46.018 Asynchronous Event Request (0Ch): Supported 00:16:46.018 Keep Alive (18h): Supported 00:16:46.018 I/O Commands 00:16:46.018 ------------ 00:16:46.018 Flush (00h): Supported 00:16:46.018 Write (01h): Supported LBA-Change 00:16:46.018 Read (02h): Supported 00:16:46.018 Write Zeroes (08h): Supported LBA-Change 00:16:46.018 Dataset Management (09h): Supported 00:16:46.018 00:16:46.018 Error Log 00:16:46.018 ========= 00:16:46.018 Entry: 0 00:16:46.018 Error Count: 0x3 00:16:46.018 Submission Queue Id: 0x0 00:16:46.018 Command Id: 0x5 00:16:46.018 Phase Bit: 0 00:16:46.018 Status Code: 0x2 00:16:46.019 Status Code Type: 0x0 00:16:46.019 Do Not Retry: 1 00:16:46.019 Error Location: 0x28 00:16:46.019 LBA: 0x0 00:16:46.019 Namespace: 0x0 00:16:46.019 Vendor Log Page: 0x0 00:16:46.019 ----------- 00:16:46.019 Entry: 1 00:16:46.019 Error Count: 0x2 00:16:46.019 Submission Queue Id: 0x0 00:16:46.019 Command Id: 0x5 00:16:46.019 Phase Bit: 0 00:16:46.019 Status Code: 0x2 00:16:46.019 Status Code Type: 0x0 00:16:46.019 Do Not Retry: 1 00:16:46.019 Error Location: 0x28 00:16:46.019 LBA: 0x0 00:16:46.019 Namespace: 0x0 00:16:46.019 Vendor Log Page: 0x0 00:16:46.019 ----------- 00:16:46.019 Entry: 2 00:16:46.019 Error Count: 0x1 00:16:46.019 Submission Queue Id: 0x0 00:16:46.019 Command Id: 0x4 00:16:46.019 Phase Bit: 0 00:16:46.019 Status Code: 0x2 00:16:46.019 Status Code Type: 0x0 00:16:46.019 Do Not Retry: 1 00:16:46.019 Error Location: 0x28 00:16:46.019 LBA: 0x0 00:16:46.019 Namespace: 0x0 00:16:46.019 Vendor Log Page: 0x0 00:16:46.019 00:16:46.019 Number of Queues 00:16:46.019 ================ 00:16:46.019 Number of I/O Submission Queues: 128 00:16:46.019 Number of I/O Completion Queues: 128 00:16:46.019 00:16:46.019 ZNS Specific Controller Data 00:16:46.019 ============================ 00:16:46.019 Zone Append Size Limit: 0 00:16:46.019 00:16:46.019 00:16:46.019 Active Namespaces 00:16:46.019 ================= 00:16:46.019 get_feature(0x05) failed 00:16:46.019 Namespace ID:1 00:16:46.019 Command Set Identifier: NVM (00h) 
00:16:46.019 Deallocate: Supported 00:16:46.019 Deallocated/Unwritten Error: Not Supported 00:16:46.019 Deallocated Read Value: Unknown 00:16:46.019 Deallocate in Write Zeroes: Not Supported 00:16:46.019 Deallocated Guard Field: 0xFFFF 00:16:46.019 Flush: Supported 00:16:46.019 Reservation: Not Supported 00:16:46.019 Namespace Sharing Capabilities: Multiple Controllers 00:16:46.019 Size (in LBAs): 1310720 (5GiB) 00:16:46.019 Capacity (in LBAs): 1310720 (5GiB) 00:16:46.019 Utilization (in LBAs): 1310720 (5GiB) 00:16:46.019 UUID: 9e7e6a7b-1651-4f7a-81a8-aefe8b41a572 00:16:46.019 Thin Provisioning: Not Supported 00:16:46.019 Per-NS Atomic Units: Yes 00:16:46.019 Atomic Boundary Size (Normal): 0 00:16:46.019 Atomic Boundary Size (PFail): 0 00:16:46.019 Atomic Boundary Offset: 0 00:16:46.019 NGUID/EUI64 Never Reused: No 00:16:46.019 ANA group ID: 1 00:16:46.019 Namespace Write Protected: No 00:16:46.019 Number of LBA Formats: 1 00:16:46.019 Current LBA Format: LBA Format #00 00:16:46.019 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:16:46.019 00:16:46.019 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:16:46.019 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:46.019 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:16:46.019 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:46.019 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:16:46.019 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:46.019 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:46.019 rmmod nvme_tcp 00:16:46.019 rmmod nvme_fabrics 00:16:46.019 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:46.019 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:16:46.019 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:16:46.019 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:46.019 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:46.019 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:46.019 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:46.019 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:46.019 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:46.019 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.019 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.019 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.019 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:46.019 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:16:46.019 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:16:46.019 
07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:16:46.277 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:46.277 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:46.277 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:46.277 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:46.277 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:16:46.277 07:20:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:16:46.277 07:20:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:46.845 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:46.845 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:47.104 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:47.104 ************************************ 00:16:47.104 END TEST nvmf_identify_kernel_target 00:16:47.104 ************************************ 00:16:47.104 00:16:47.104 real 0m2.843s 00:16:47.104 user 0m1.023s 00:16:47.104 sys 0m1.291s 00:16:47.104 07:20:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:47.104 07:20:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.104 07:20:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:47.104 07:20:55 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:47.104 07:20:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:47.104 07:20:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:47.104 07:20:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:47.104 ************************************ 00:16:47.104 START TEST nvmf_auth_host 00:16:47.104 ************************************ 00:16:47.104 07:20:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:47.104 * Looking for test storage... 
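Editor's note: the clean_kernel_target trap traced just above tears the kernel target back down before the next test starts. Collected into one place, the traced commands amount to roughly the following (same NQN as above; the destination of the 'echo 0' is not shown in the trace and is assumed to be the namespace enable attribute):

nqn=nqn.2016-06.io.spdk:testnqn
echo 0 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable   # assumed target of 'echo 0'
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/$nqn                  # drop the port->subsystem link first
rmdir /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/$nqn
modprobe -r nvmet_tcp nvmet                                             # unload the kernel target modules

The order matters: the symlink under ports/1/subsystems has to go before the subsystem directory itself can be removed. After the modules are unloaded, setup.sh rebinds the NVMe devices to uio_pci_generic, as the log shows next.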
00:16:47.104 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:47.104 07:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:47.104 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:16:47.104 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:47.104 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:47.104 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:47.104 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:47.104 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:47.104 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:47.104 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:47.104 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:47.104 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:47.104 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:47.104 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:16:47.104 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:16:47.104 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:47.104 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:47.104 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:47.104 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:47.104 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.105 07:20:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:47.364 Cannot find device "nvmf_tgt_br" 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:47.364 Cannot find device "nvmf_tgt_br2" 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:47.364 Cannot find device "nvmf_tgt_br" 
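The nvmf_veth_init trace above fixes the test topology: the initiator address 10.0.0.1 stays on the host, the target addresses 10.0.0.2 and 10.0.0.3 live inside the nvmf_tgt_ns_spdk network namespace, and all veth peers are joined through the nvmf_br bridge. A condensed sketch of the setup carried out in the trace that follows (interface names and addresses as in the trace; the stale-link teardown and error handling are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk        # target-side peers move into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br                # host-side peers bridged together
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

Once the cross-namespace pings succeed, NVMF_APP is prefixed with the ip netns exec wrapper, so the application launched later by nvmfappstart runs inside the namespace and reaches 10.0.0.1:4420 across the bridge.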
00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:47.364 Cannot find device "nvmf_tgt_br2" 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:47.364 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:47.364 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:47.364 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:47.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:47.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:16:47.623 00:16:47.623 --- 10.0.0.2 ping statistics --- 00:16:47.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.623 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:47.623 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:47.623 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:16:47.623 00:16:47.623 --- 10.0.0.3 ping statistics --- 00:16:47.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.623 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:47.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:47.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:16:47.623 00:16:47.623 --- 10.0.0.1 ping statistics --- 00:16:47.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.623 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=78480 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 78480 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78480 ']' 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:47.623 07:20:56 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:47.623 07:20:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.881 07:20:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:47.881 07:20:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:16:47.881 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:47.881 07:20:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:47.881 07:20:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.881 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.881 07:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:16:47.881 07:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:16:47.881 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:47.881 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:47.881 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:47.881 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:47.881 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:47.881 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:47.881 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7f1cfe73b449a1181188b8a1def6d469 00:16:47.881 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:47.881 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.gab 00:16:47.881 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7f1cfe73b449a1181188b8a1def6d469 0 00:16:47.881 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7f1cfe73b449a1181188b8a1def6d469 0 00:16:47.881 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:47.881 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:47.881 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7f1cfe73b449a1181188b8a1def6d469 00:16:47.881 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:47.881 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.gab 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.gab 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.gab 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=87934a29ffb8da8891813a3d1f883daf59884a4602349afbe30b740bd8075dfb 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.00S 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 87934a29ffb8da8891813a3d1f883daf59884a4602349afbe30b740bd8075dfb 3 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 87934a29ffb8da8891813a3d1f883daf59884a4602349afbe30b740bd8075dfb 3 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=87934a29ffb8da8891813a3d1f883daf59884a4602349afbe30b740bd8075dfb 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.00S 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.00S 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.00S 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f2956ed34070e15be5b0285a7708da412a2ed6498cffc830 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.73Y 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f2956ed34070e15be5b0285a7708da412a2ed6498cffc830 0 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f2956ed34070e15be5b0285a7708da412a2ed6498cffc830 0 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f2956ed34070e15be5b0285a7708da412a2ed6498cffc830 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.73Y 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.73Y 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.73Y 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d9836fb489e8d558bd10dd1cd5558931f03bdce1e7f6c1e3 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.BQp 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d9836fb489e8d558bd10dd1cd5558931f03bdce1e7f6c1e3 2 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d9836fb489e8d558bd10dd1cd5558931f03bdce1e7f6c1e3 2 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d9836fb489e8d558bd10dd1cd5558931f03bdce1e7f6c1e3 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:16:48.141 07:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:48.141 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.BQp 00:16:48.141 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.BQp 00:16:48.141 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.BQp 00:16:48.141 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:48.141 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:48.141 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:48.141 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:48.141 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:16:48.141 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:48.141 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:48.141 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f072daf77558fb435229058ce58996fc 00:16:48.141 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:48.141 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.JGV 00:16:48.141 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f072daf77558fb435229058ce58996fc 
1 00:16:48.141 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f072daf77558fb435229058ce58996fc 1 00:16:48.141 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:48.141 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:48.141 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f072daf77558fb435229058ce58996fc 00:16:48.141 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:16:48.141 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.JGV 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.JGV 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.JGV 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cf0267838e0f48edaf7478daec6b83f7 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.PdV 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cf0267838e0f48edaf7478daec6b83f7 1 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cf0267838e0f48edaf7478daec6b83f7 1 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cf0267838e0f48edaf7478daec6b83f7 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.PdV 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.PdV 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.PdV 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:48.400 07:20:57 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f4bc9a08fa9d2fb03678dc67be868cebeeb8e480cd7d885f 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.o6D 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f4bc9a08fa9d2fb03678dc67be868cebeeb8e480cd7d885f 2 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f4bc9a08fa9d2fb03678dc67be868cebeeb8e480cd7d885f 2 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f4bc9a08fa9d2fb03678dc67be868cebeeb8e480cd7d885f 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.o6D 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.o6D 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.o6D 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a01c9eb23e91edd58ea00c6bddbf0235 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.RvU 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a01c9eb23e91edd58ea00c6bddbf0235 0 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a01c9eb23e91edd58ea00c6bddbf0235 0 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a01c9eb23e91edd58ea00c6bddbf0235 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.RvU 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.RvU 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.RvU 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=435658c24dd62114694d8ea93196bd0deca1c90b079ec3dab5e741cc7f8c386c 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.SOW 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 435658c24dd62114694d8ea93196bd0deca1c90b079ec3dab5e741cc7f8c386c 3 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 435658c24dd62114694d8ea93196bd0deca1c90b079ec3dab5e741cc7f8c386c 3 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=435658c24dd62114694d8ea93196bd0deca1c90b079ec3dab5e741cc7f8c386c 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:16:48.400 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:48.658 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.SOW 00:16:48.658 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.SOW 00:16:48.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.659 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.SOW 00:16:48.659 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:16:48.659 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78480 00:16:48.659 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78480 ']' 00:16:48.659 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.659 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.659 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
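Each gen_dhchap_key call above draws a random secret with xxd -p from /dev/urandom, wraps it into a DHHC-1 secret string, and stores it in a mode-0600 temp file; five host keys (keys[0..4]) and four controller keys (ckeys[0..3]) are produced this way. The wrapping itself happens in the small python step behind format_dhchap_key. A stand-alone sketch of that encoding, assuming the usual DH-HMAC-CHAP secret representation (base64 of the secret bytes followed by their little-endian CRC-32, with the second field naming the retained-hash id, 00 = none), which matches the DHHC-1:00:... values visible later in the trace:

    # hypothetical helper, condensed from gen_dhchap_key/format_dhchap_key as traced above
    gen_null_key() {
        local hexlen=$1                                       # secret length in hex characters, e.g. 32 or 64
        local hex file
        hex=$(xxd -p -c0 -l $((hexlen / 2)) /dev/urandom)     # random secret, hex-encoded
        file=$(mktemp -t spdk.key-null.XXX)
        # DHHC-1:<hash id>:<base64(secret || crc32(secret))>: -- the ASCII hex string itself is the secret
        python3 -c 'import base64,sys,zlib; s=sys.argv[1].encode(); print("DHHC-1:00:%s:" % base64.b64encode(s + zlib.crc32(s).to_bytes(4, "little")).decode(), end="")' "$hex" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

    key_file=$(gen_null_key 32)    # temp file containing a DHHC-1:00:...: secret

Keys generated for a sha256/sha384/sha512 digest are handled the same way; only the second field of the string changes (01/02/03, as in the DHHC-1:01/02/03 values seen later in the trace).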
00:16:48.659 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.659 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gab 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.00S ]] 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.00S 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.73Y 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.BQp ]] 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BQp 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.JGV 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.PdV ]] 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.PdV 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.o6D 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.917 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.RvU ]] 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.RvU 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.SOW 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
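With the key material registered in the running application through keyring_file_add_key (key0..key4 plus ckey0..ckey3 above), the rest of the test drives SPDK's host-side DH-HMAC-CHAP against the kernel nvmet soft target that configure_kernel_target is about to build on 10.0.0.1:4420. rpc_cmd is effectively a wrapper around scripts/rpc.py, so the host-side flow can be sketched with direct RPC calls (key names, NQNs, addresses and flags below are taken from the trace):

    # make the generated secrets available to bdev_nvme by name
    scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.73Y
    scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BQp
    # restrict which digests and DH groups the host may negotiate
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # connect to the kernel target and authenticate; --dhchap-ctrlr-key also
    # authenticates the controller back to the host (bidirectional)
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    scripts/rpc.py bdev_nvme_get_controllers     # an nvme0 entry indicates the connect succeeded
    scripts/rpc.py bdev_nvme_detach_controller nvme0

On the kernel side, nvmet_auth_set_key mirrors the same credentials for nqn.2024-02.io.spdk:host0 by writing the hmac name, the DH group and the DHHC-1 strings into that host's configfs entry (the exact attribute paths sit behind the echo redirects in auth.sh and are not shown in the trace), so that both ends agree on the secret for each digest/dhgroup/key combination exercised by the loops that follow.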
00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:48.918 07:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:49.176 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:49.435 Waiting for block devices as requested 00:16:49.435 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:49.435 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:50.002 07:20:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:50.002 07:20:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:50.002 07:20:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:16:50.002 07:20:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:16:50.002 07:20:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:50.002 07:20:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:50.002 07:20:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:16:50.002 07:20:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:50.002 07:20:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:50.002 No valid GPT data, bailing 00:16:50.002 07:20:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:50.002 07:20:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:50.002 07:20:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:50.002 07:20:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:16:50.002 07:20:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:50.002 07:20:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:50.002 07:20:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:16:50.002 07:20:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:16:50.002 07:20:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:50.002 07:20:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:50.002 07:20:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:16:50.002 07:20:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:50.002 07:20:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:50.260 No valid GPT data, bailing 00:16:50.260 07:20:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:50.260 07:20:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:50.260 07:20:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:50.260 07:20:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:16:50.260 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:16:50.260 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:50.260 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:16:50.260 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:16:50.260 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:50.260 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:50.260 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:16:50.260 07:20:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:50.260 07:20:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:50.260 No valid GPT data, bailing 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:50.261 No valid GPT data, bailing 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:16:50.261 07:20:59 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -a 10.0.0.1 -t tcp -s 4420 00:16:50.261 00:16:50.261 Discovery Log Number of Records 2, Generation counter 2 00:16:50.261 =====Discovery Log Entry 0====== 00:16:50.261 trtype: tcp 00:16:50.261 adrfam: ipv4 00:16:50.261 subtype: current discovery subsystem 00:16:50.261 treq: not specified, sq flow control disable supported 00:16:50.261 portid: 1 00:16:50.261 trsvcid: 4420 00:16:50.261 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:50.261 traddr: 10.0.0.1 00:16:50.261 eflags: none 00:16:50.261 sectype: none 00:16:50.261 =====Discovery Log Entry 1====== 00:16:50.261 trtype: tcp 00:16:50.261 adrfam: ipv4 00:16:50.261 subtype: nvme subsystem 00:16:50.261 treq: not specified, sq flow control disable supported 00:16:50.261 portid: 1 00:16:50.261 trsvcid: 4420 00:16:50.261 subnqn: nqn.2024-02.io.spdk:cnode0 00:16:50.261 traddr: 10.0.0.1 00:16:50.261 eflags: none 00:16:50.261 sectype: none 00:16:50.261 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: ]] 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.520 nvme0n1 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.520 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: ]] 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:50.779 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.780 nvme0n1 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: ]] 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.780 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.039 nvme0n1 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.039 07:20:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: ]] 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.039 nvme0n1 00:16:51.039 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.298 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.298 07:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.298 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.298 07:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: ]] 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:16:51.298 07:21:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.298 nvme0n1 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.298 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.557 nvme0n1 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:51.557 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: ]] 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.815 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.074 nvme0n1 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: ]] 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.074 07:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.333 nvme0n1 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.333 07:21:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: ]] 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.333 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:52.334 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.334 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.334 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.334 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.334 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:16:52.334 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.334 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.334 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.334 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.334 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.334 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.334 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:52.334 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.334 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.334 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.334 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.334 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.592 nvme0n1 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: ]] 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:52.592 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.593 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:52.593 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.593 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.593 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.593 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.593 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:52.593 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.593 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.593 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.593 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.593 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.593 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.593 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:52.593 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.593 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.593 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:52.593 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.593 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.593 nvme0n1 00:16:52.593 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.593 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.593 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.593 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.593 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.593 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.851 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
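Each pass through the trace above follows the same per-key pattern: nvmet_auth_set_key (host/auth.sh@42-51) provisions the DH-HMAC-CHAP secret for the given digest, DH group and key index on the target side, and connect_authenticate then verifies it from the host. Below is a minimal bash sketch of the target-side helper, reconstructed only from the values echoed in the trace; it assumes the keys/ckeys arrays defined earlier in the test, and the redirect targets of the echo calls are not visible in the xtrace output, so the nvmet configfs paths are an assumption, not the verbatim script.

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    # ASSUMPTION: the echoes land in the kernel nvmet configfs entry for the
    # host NQN used by this run; the exact paths are not shown in the trace.
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$host/dhchap_hash"    # e.g. 'hmac(sha256)' in the trace
    echo "$dhgroup" > "$host/dhchap_dhgroup"      # e.g. ffdhe2048 / ffdhe3072
    echo "$key" > "$host/dhchap_key"              # DHHC-1:0x:... secret
    # keyid 4 carries no controller key in this run ([[ -z '' ]] above), so the
    # bidirectional secret is only written when a ckey is defined.
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}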
00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.852 nvme0n1 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:52.852 07:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: ]] 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
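The host-side half of each iteration is fully visible in the trace (host/auth.sh@55-65 together with get_main_ns_ip from nvmf/common.sh@741-755): restrict the initiator to the digest/dhgroup under test, attach over TCP with the matching --dhchap-key/--dhchap-ctrlr-key, confirm the controller actually came up, then detach before the next combination. A sketch assembled from those xtrace lines follows; it paraphrases the flow rather than quoting the function verbatim, and assumes the rpc_cmd helper and the ckeys array provided by the surrounding test environment.

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3

    # Allow only the digest/dhgroup pair being exercised.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # get_main_ns_ip picks NVMF_INITIATOR_IP (10.0.0.1) for the tcp transport.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" \
        ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

    # Authentication succeeded only if a controller named nvme0 exists.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

    # Tear down before the next digest/dhgroup/key combination.
    rpc_cmd bdev_nvme_detach_controller nvme0
}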
00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.787 nvme0n1 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:16:53.787 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: ]] 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.788 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.046 nvme0n1 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: ]] 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.046 07:21:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.304 nvme0n1 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: ]] 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.304 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.562 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.562 07:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:54.562 07:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:54.562 07:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:54.562 07:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.562 07:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.562 07:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:54.562 07:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.562 07:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:54.562 07:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:54.562 07:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:54.562 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:54.562 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.562 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.562 nvme0n1 00:16:54.562 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.562 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:16:54.562 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.562 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.562 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.562 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:54.820 07:21:03 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.820 nvme0n1 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.820 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.079 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.079 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.079 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.079 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.079 07:21:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.079 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.079 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.079 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:16:55.079 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.079 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:55.079 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:55.079 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:55.079 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:16:55.079 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:16:55.079 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:55.079 07:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: ]] 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.993 nvme0n1 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.993 07:21:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: ]] 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.278 07:21:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.278 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.278 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:57.278 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:57.278 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:57.278 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:57.278 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.278 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.278 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:16:57.278 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.278 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:57.278 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:57.278 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:57.278 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.278 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.278 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.537 nvme0n1 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: ]] 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:57.537 
07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.537 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.103 nvme0n1 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: ]] 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:58.103 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:58.104 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.104 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:58.104 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.104 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.104 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.104 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.104 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:58.104 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:58.104 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:58.104 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.104 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.104 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:58.104 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.104 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:58.104 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:58.104 07:21:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:58.104 07:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:58.104 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.104 07:21:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.362 nvme0n1 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.362 07:21:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.363 07:21:07 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.363 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.363 07:21:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:58.363 07:21:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:58.363 07:21:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:58.363 07:21:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.363 07:21:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.363 07:21:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:58.363 07:21:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.620 07:21:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:58.620 07:21:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:58.620 07:21:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:58.620 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:58.620 07:21:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.620 07:21:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.879 nvme0n1 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: ]] 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.879 07:21:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.443 nvme0n1 00:16:59.443 07:21:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.443 07:21:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.443 07:21:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.443 07:21:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.443 07:21:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.443 07:21:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: ]] 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.700 07:21:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:59.701 07:21:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:59.701 07:21:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:59.701 07:21:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.701 07:21:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.701 07:21:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.264 nvme0n1 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: ]] 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.264 07:21:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.198 nvme0n1 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.198 
07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: ]] 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
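The trace repeats the same host-side cycle for every digest/dhgroup/key combination. A condensed sketch of that cycle, using only commands that appear verbatim in this log (rpc_cmd is assumed to be the test suite's wrapper around scripts/rpc.py; nvme0, the 10.0.0.1:4420 listener and the host/subsystem NQNs are the values used by this run; the target-side key setup done by nvmet_auth_set_key is omitted because the trace only shows the echoed digest, dhgroup and key values):

# Restrict the initiator to the digest and DH group under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Connect with DH-HMAC-CHAP; passing a controller key (ckeyN) as well makes the
# authentication bidirectional (keyid 4 has no controller key, so that flag is
# dropped for it).
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Confirm the controller attached, then detach before the next combination.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0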
00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.198 07:21:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.764 nvme0n1 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:01.764 
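The secrets echoed above follow the DH-HMAC-CHAP secret representation DHHC-1:<hh>:<base64>:, where <hh> is understood to name the hash used to transform the configured secret (00 = none, 01/02/03 = SHA-256/384/512) and the base64 field carries the secret plus a 4-byte CRC-32 trailer; that reading matches the decoded lengths of the keys in this log. A quick standalone check of the keyid=4 secret used just above (not part of auth.sh):

# Decode the base64 field of the DHHC-1:03: secret and print its size:
# 68 bytes = 64-byte (SHA-512-length) secret + 4-byte CRC-32 trailer.
key='DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=:'
printf '%s' "$key" | cut -d: -f3 | base64 -d | wc -c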
07:21:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.764 07:21:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.330 nvme0n1 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: ]] 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:02.330 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:02.331 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:02.331 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.331 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:02.331 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.331 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.331 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.331 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.331 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:02.331 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.588 nvme0n1 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: ]] 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:02.588 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.589 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:02.589 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:02.589 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:02.589 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:17:02.589 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:02.589 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.589 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.589 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.589 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.589 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:02.589 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:02.589 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:02.589 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.589 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.589 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:02.589 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.589 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:02.589 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:02.589 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:02.589 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.589 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.589 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.846 nvme0n1 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: ]] 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:02.846 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:02.847 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.847 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:02.847 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.847 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.847 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.847 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.847 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:02.847 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:02.847 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:02.847 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.847 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.847 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:02.847 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.847 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:02.847 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:02.847 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:02.847 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.847 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.847 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.847 nvme0n1 00:17:02.847 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.847 07:21:11 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.847 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.847 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.847 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.847 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: ]] 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.109 nvme0n1 00:17:03.109 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:03.110 07:21:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:03.110 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:03.110 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.110 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.367 nvme0n1 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: ]] 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
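The reason the key4 attach commands in this trace omit --dhchap-ctrlr-key while the key0..key3 attaches carry one is the array-expansion guard visible at host/auth.sh@58 in every iteration: ckeys[4] is empty, so ${ckeys[keyid]:+...} expands to nothing and the flag disappears from the command line. A small self-contained illustration of that bash idiom follows; the key strings here are placeholders, not the secrets from this run.

#!/usr/bin/env bash
# Illustration of the ${var:+...} guard used at host/auth.sh@58: the
# --dhchap-ctrlr-key flag is only emitted when a controller key exists
# for the given keyid. Values below are placeholders.
ckeys=("ckey-secret-0" "ckey-secret-1" "ckey-secret-2" "ckey-secret-3" "")

for keyid in "${!ckeys[@]}"; do
    # Expands to two words when ckeys[keyid] is non-empty, to nothing otherwise.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> attach args: --dhchap-key key${keyid} ${ckey[*]}"
done

Running the snippet prints the optional flag for keyids 0-3 and an empty tail for keyid 4, mirroring the unidirectional (host-only) authentication case exercised with key4 above.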
00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.367 nvme0n1 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.367 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: ]] 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.628 nvme0n1 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: ]] 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:03.628 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:03.629 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:03.629 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.629 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.629 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:03.629 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.629 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:03.629 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:03.629 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:03.629 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.629 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.629 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.894 nvme0n1 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: ]] 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.894 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.152 nvme0n1 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.152 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:04.153 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:04.153 07:21:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:04.153 07:21:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:04.153 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.153 07:21:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.153 nvme0n1 00:17:04.153 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.153 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.153 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.153 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.153 07:21:13 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.153 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.413 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.413 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.413 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.413 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.413 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.413 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: ]] 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.414 nvme0n1 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.414 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: ]] 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:04.672 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:04.673 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.673 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.673 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.673 nvme0n1 00:17:04.673 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.673 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.930 07:21:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: ]] 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:04.930 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.931 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.931 07:21:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:04.931 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.931 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:04.931 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:04.931 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:04.931 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.931 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.931 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.190 nvme0n1 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: ]] 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:05.190 07:21:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.190 07:21:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.449 nvme0n1 00:17:05.449 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.449 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.449 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.449 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.449 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.449 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:05.450 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.709 nvme0n1 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: ]] 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.709 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.277 nvme0n1 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: ]] 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.277 07:21:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.277 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.277 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.277 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:06.277 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:06.277 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:06.277 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.277 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.277 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:06.277 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.277 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:06.277 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:06.277 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:06.277 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.277 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.277 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.536 nvme0n1 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.536 07:21:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: ]] 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.536 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.103 nvme0n1 00:17:07.103 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.103 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.103 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.103 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.103 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.103 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.103 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.103 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.103 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.103 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.103 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.103 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.103 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:17:07.103 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.103 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:07.103 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: ]] 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.104 07:21:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.362 nvme0n1 00:17:07.362 07:21:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.362 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.362 07:21:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.362 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.362 07:21:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.362 07:21:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
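Note: the get_main_ns_ip block traced just above (nvmf/common.sh@741-755) repeats before every controller attach in this log. A rough reconstruction of the helper, inferred only from the expanded values shown in the trace, is sketched below; the transport variable name and the indirect expansion are assumptions, since the trace only shows the already-expanded strings (tcp, NVMF_INITIATOR_IP, 10.0.0.1).

get_main_ns_ip() {
	# Reconstructed sketch, not the verbatim helper from nvmf/common.sh.
	local ip
	local -A ip_candidates=()
	# Map transport -> name of the env var that holds the address (names as shown in the trace).
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP
	# $TEST_TRANSPORT is an assumed variable name; the trace shows it expanding to "tcp".
	[[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}   # variable *name*, e.g. NVMF_INITIATOR_IP
	ip=${!ip}                              # assumed indirection; resolves to 10.0.0.1 in this run
	[[ -z $ip ]] && return 1
	echo "$ip"
}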
00:17:07.620 07:21:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:07.621 07:21:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:07.621 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:07.621 07:21:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.621 07:21:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.879 nvme0n1 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: ]] 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
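For reference, the round that this trace repeats for every digest/dhgroup/keyid combination can be condensed, using only commands visible above, into roughly the following sequence; how the bdev_nvme_get_controllers output is fed into jq is inferred, and for keyid 4 the controller key is empty so --dhchap-ctrlr-key is dropped.

# Condensed sketch of one host/auth.sh authentication round as seen in this trace.
nvmet_auth_set_key sha384 ffdhe8192 "$keyid"   # provision DHHC-1 key/ckey $keyid on the target side
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expected to report nvme0
rpc_cmd bdev_nvme_detach_controller nvme0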
00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:07.879 07:21:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:08.138 07:21:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.138 07:21:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.138 07:21:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.706 nvme0n1 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: ]] 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.706 07:21:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.274 nvme0n1 00:17:09.274 07:21:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.274 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.274 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.274 07:21:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.274 07:21:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.274 07:21:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: ]] 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.545 07:21:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.123 nvme0n1 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: ]] 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.123 07:21:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.123 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.123 07:21:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:10.123 07:21:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:10.123 07:21:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:10.123 07:21:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.123 07:21:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.123 07:21:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:10.123 07:21:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.123 07:21:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:10.123 07:21:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:10.123 07:21:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:10.123 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:10.123 07:21:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.123 07:21:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.690 nvme0n1 00:17:10.690 07:21:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.690 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:17:10.690 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.690 07:21:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.690 07:21:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.690 07:21:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:10.950 07:21:19 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.950 07:21:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:10.951 07:21:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:10.951 07:21:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:10.951 07:21:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:10.951 07:21:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.951 07:21:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.517 nvme0n1 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: ]] 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.517 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.776 nvme0n1 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.776 07:21:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: ]] 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.776 nvme0n1 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.776 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: ]] 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.035 nvme0n1 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.035 07:21:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: ]] 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:12.035 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.036 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:12.036 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:12.036 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:12.036 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.036 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:12.036 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.036 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.036 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.036 07:21:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.036 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:12.036 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:12.036 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:12.036 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.036 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.036 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:12.036 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.036 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:12.036 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:12.036 07:21:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:12.036 07:21:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:12.036 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.036 07:21:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.294 nvme0n1 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.294 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.295 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.295 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.295 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:12.295 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:12.295 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:12.295 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.295 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.295 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:12.295 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.295 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:12.295 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:12.295 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:12.295 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:12.295 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.295 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.553 nvme0n1 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: ]] 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.553 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.554 nvme0n1 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.554 
07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: ]] 00:17:12.554 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:17:12.812 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:12.812 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.812 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:12.812 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.813 07:21:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.813 nvme0n1 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
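For reference, each nvmet_auth_set_key call in this test (the echo 'hmac(...)', echo ffdhe*, and echo DHHC-1:... lines at host/auth.sh@42-51) provisions the DH-HMAC-CHAP parameters on the kernel nvmet target for the host NQN before the host attempts to connect. A minimal bash sketch of that step, assuming the standard nvmet configfs layout; the exact paths the helper writes to are not visible in this excerpt, and the secrets below are placeholders rather than the keys shown in the log:

#!/usr/bin/env bash
# Hedged approximation of nvmet_auth_set_key for one digest/dhgroup/keyid.
# The configfs path and the variable values are assumptions for illustration.
host_nqn=nqn.2024-02.io.spdk:host0
cfs=/sys/kernel/config/nvmet/hosts/$host_nqn
key='DHHC-1:01:<host-secret>:'         # placeholder host secret for this keyid
ckey='DHHC-1:01:<controller-secret>:'  # empty when the keyid has no controller key
echo 'hmac(sha512)' > "$cfs/dhchap_hash"     # digest, e.g. hmac(sha384) or hmac(sha512)
echo ffdhe3072      > "$cfs/dhchap_dhgroup"  # one of the FFDHE groups swept by the test
echo "$key"         > "$cfs/dhchap_key"      # host secret
[[ -z "$ckey" ]] || echo "$ckey" > "$cfs/dhchap_ctrl_key"  # only for bidirectional auth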
00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: ]] 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.813 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.071 nvme0n1 00:17:13.071 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.071 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.071 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.071 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.071 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.071 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.072 07:21:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: ]] 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
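On the host (SPDK) side, every iteration in this log then runs the same connect_authenticate sequence through rpc_cmd: restrict the allowed digest and DH group, attach with the DH-HMAC-CHAP key names registered during test setup, confirm the controller appears, and detach before the next combination. A condensed sketch of that sequence, with rpc_cmd approximated as a thin wrapper around SPDK's rpc.py (the script path is an assumption) and key registration omitted because it happens before this excerpt:

#!/usr/bin/env bash
set -euo pipefail

# rpc_cmd in the log is a test helper; approximated here as an rpc.py wrapper.
rpc_cmd() { ./scripts/rpc.py "$@"; }

digest=sha512      # digest under test (sha384 and sha512 both appear in this excerpt)
dhgroup=ffdhe3072  # DH groups swept range from ffdhe2048 up to ffdhe8192
keyid=3            # index into the keyN/ckeyN names registered earlier in the test

# 1) Allow only this digest/dhgroup pair for the next connection attempt.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2) Attach with DH-HMAC-CHAP. keyN/ckeyN are key names registered during setup
#    (registration is not shown here); the controller key is passed only when a
#    bidirectional secret exists for this keyid.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# 3) Verify the authenticated controller shows up, then 4) detach it before the
#    next digest/dhgroup/keyid combination.
[[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0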
00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.072 07:21:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.330 nvme0n1 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:13.330 
07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:13.330 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.331 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.331 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:13.331 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.331 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:13.331 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:13.331 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:13.331 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:13.331 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.331 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.331 nvme0n1 00:17:13.331 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.331 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.331 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.331 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.331 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.331 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: ]] 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.590 nvme0n1 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.590 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: ]] 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.849 07:21:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:13.849 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.850 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:13.850 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:13.850 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:13.850 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.850 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.850 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.850 nvme0n1 00:17:13.850 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
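The trace above repeats one host-side cycle per (digest, dhgroup, keyid) combination: restrict the initiator to a single digest and DH group with bdev_nvme_set_options, attach with the matching --dhchap-key / --dhchap-ctrlr-key pair, confirm that a controller named nvme0 actually appeared, then detach. A minimal sketch of that cycle, reconstructed from the trace, follows; rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, the ckeys array comes from the surrounding auth.sh, and the address, port, and NQNs are simply the values this run uses.

# Host-side connect/verify/detach cycle as reconstructed from the xtrace output.
# Assumes rpc_cmd (SPDK RPC wrapper) and the ckeys array from the enclosing script.
connect_and_check() {
  local digest=$1 dhgroup=$2 keyid=$3
  # The controller (bidirectional) key is optional; keyid 4 has none in this run.
  local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

  # Allow exactly the digest/dhgroup under test on the initiator side.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"

  # DH-HMAC-CHAP succeeded only if the controller is actually listed afterwards.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  rpc_cmd bdev_nvme_detach_controller nvme0
}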
00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: ]] 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.110 07:21:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.372 nvme0n1 00:17:14.372 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.372 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:17:14.372 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.372 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.372 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.372 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: ]] 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.373 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.630 nvme0n1 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.630 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.888 nvme0n1 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: ]] 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
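Between configuring options and attaching, connect_authenticate asks nvmf/common.sh for the address to dial via get_main_ns_ip. The trace shows an associative array mapping each transport to the name of an environment variable (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp), which is then expanded indirectly to 10.0.0.1 for this tcp/virt run. A hedged reconstruction follows: the transport variable is written as TEST_TRANSPORT (only its value, tcp, is visible in the trace), and the early-return guards are assumptions since xtrace only shows the checks that passed here.

# Address selection as suggested by the nvmf/common.sh trace. Each transport maps
# to the NAME of an environment variable, which is dereferenced with ${!ip}.
get_main_ns_ip() {
  local ip
  local -A ip_candidates=(
    ["rdma"]=NVMF_FIRST_TARGET_IP
    ["tcp"]=NVMF_INITIATOR_IP     # resolves to 10.0.0.1 in this run
  )

  # TEST_TRANSPORT and the guard behaviour are assumptions; the trace only shows
  # the passing path for transport "tcp".
  [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]:-} ]] || return 1
  ip=${ip_candidates[$TEST_TRANSPORT]}
  [[ -n ${!ip} ]] || return 1     # indirect expansion: variable name -> value
  echo "${!ip}"
}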
00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.888 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:14.889 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:14.889 07:21:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:14.889 07:21:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.889 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.889 07:21:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.151 nvme0n1 00:17:15.151 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.151 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.151 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.151 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.151 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.151 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.416 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.416 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.416 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.416 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.416 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.416 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.416 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:15.416 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.416 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:15.416 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:15.416 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:15.416 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:17:15.416 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:17:15.416 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:15.416 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:15.416 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:17:15.416 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: ]] 00:17:15.416 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:17:15.416 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
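On the target side, each iteration starts with nvmet_auth_set_key, which pushes the DHHC-1 secrets for the chosen digest and FFDHE group into the kernel nvmet host entry before the host tries to connect. xtrace does not print redirection targets, so the configfs paths in the sketch below are assumptions based on the usual /sys/kernel/config/nvmet layout; the echoed values themselves ('hmac(sha512)', the dhgroup name, and the DHHC-1 key strings) are exactly what the trace shows.

# Target-side key setup, roughly as traced. Only the echoed values appear in the
# log; the destination attributes (dhchap_hash, dhchap_dhgroup, dhchap_key,
# dhchap_ctrl_key under the host's configfs entry) are assumed.
nvmet_auth_set_key() {
  local digest=$1 dhgroup=$2 keyid=$3
  local key=${keys[keyid]} ckey=${ckeys[keyid]}                        # DHHC-1:... strings
  local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path

  echo "hmac(${digest})" > "${host}/dhchap_hash"
  echo "${dhgroup}"      > "${host}/dhchap_dhgroup"
  echo "${key}"          > "${host}/dhchap_key"
  # The controller (bidirectional) key is optional; keyid 4 has no ckey in this run.
  [[ -z ${ckey} ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"
}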
00:17:15.416 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.416 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:15.417 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:15.417 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:15.417 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.417 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:15.417 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.417 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.417 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.417 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.417 07:21:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:15.417 07:21:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:15.417 07:21:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:15.417 07:21:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.417 07:21:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.417 07:21:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:15.417 07:21:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.417 07:21:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:15.417 07:21:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:15.417 07:21:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:15.417 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.417 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.417 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.675 nvme0n1 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: ]] 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.675 07:21:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.676 07:21:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:15.676 07:21:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.676 07:21:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:15.676 07:21:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:15.676 07:21:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:15.676 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.676 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.676 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.243 nvme0n1 00:17:16.243 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.243 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.243 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.243 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.244 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.244 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.244 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.244 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.244 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.244 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.244 07:21:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.244 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.244 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:16.244 07:21:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: ]] 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.244 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.502 nvme0n1 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.502 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:16.761 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:16.761 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:16.761 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:16.761 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.761 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.019 nvme0n1 00:17:17.019 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.019 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.019 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.019 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.019 07:21:25 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.019 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.019 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.019 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.019 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.019 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.019 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.019 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.019 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.019 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:17.019 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.019 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:17.019 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:17.019 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:17.019 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:17:17.019 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:17:17.019 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:17.019 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:17.019 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2YxY2ZlNzNiNDQ5YTExODExODhiOGExZGVmNmQ0NjkqsFkf: 00:17:17.019 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: ]] 00:17:17.019 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODc5MzRhMjlmZmI4ZGE4ODkxODEzYTNkMWY4ODNkYWY1OTg4NGE0NjAyMzQ5YWZiZTMwYjc0MGJkODA3NWRmYjt+dVA=: 00:17:17.019 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:17.020 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.020 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:17.020 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:17.020 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:17.020 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.020 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:17.020 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.020 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.020 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.020 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.020 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:17.020 07:21:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:17.020 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:17.020 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.020 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.020 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:17.020 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.020 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:17.020 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:17.020 07:21:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:17.020 07:21:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.020 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.020 07:21:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.587 nvme0n1 00:17:17.587 07:21:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.587 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.587 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.587 07:21:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.587 07:21:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: ]] 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:17.845 07:21:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:17.846 07:21:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.846 07:21:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.846 07:21:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.413 nvme0n1 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.413 07:21:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA3MmRhZjc3NTU4ZmI0MzUyMjkwNThjZTU4OTk2ZmM+rls8: 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: ]] 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2YwMjY3ODM4ZTBmNDhlZGFmNzQ3OGRhZWM2YjgzZjd1Yc3n: 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.413 07:21:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:18.414 07:21:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:18.414 07:21:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:18.414 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.414 07:21:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.414 07:21:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.348 nvme0n1 00:17:19.348 07:21:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.348 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.348 07:21:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.348 07:21:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.348 07:21:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.348 07:21:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjRiYzlhMDhmYTlkMmZiMDM2NzhkYzY3YmU4NjhjZWJlZWI4ZTQ4MGNkN2Q4ODVmmwglGQ==: 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: ]] 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTAxYzllYjIzZTkxZWRkNThlYTAwYzZiZGRiZjAyMzWoeuj8: 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:19.348 07:21:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.348 07:21:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.914 nvme0n1 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM1NjU4YzI0ZGQ2MjExNDY5NGQ4ZWE5MzE5NmJkMGRlY2ExYzkwYjA3OWVjM2RhYjVlNzQxY2M3ZjhjMzg2Y7vhrdc=: 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.914 07:21:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.915 07:21:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:19.915 07:21:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.915 07:21:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:19.915 07:21:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:19.915 07:21:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:19.915 07:21:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:19.915 07:21:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:19.915 07:21:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.481 nvme0n1 00:17:20.481 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.481 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.481 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.481 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.481 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.481 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjI5NTZlZDM0MDcwZTE1YmU1YjAyODVhNzcwOGRhNDEyYTJlZDY0OThjZmZjODMwzmuk2Q==: 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: ]] 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDk4MzZmYjQ4OWU4ZDU1OGJkMTBkZDFjZDU1NTg5MzFmMDNiZGNlMWU3ZjZjMWUzwOWmdQ==: 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.740 
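The trace above is the positive half of the DH-HMAC-CHAP host test: for each keyid (1 through 4, all sha512/ffdhe8192 here) the script programs the key pair into the kernel nvmet target, restricts SPDK's initiator to the matching digest and DH group, attaches over TCP with that key, confirms the controller shows up, and detaches again. One iteration condenses to roughly the sketch below; the configfs attribute names are an assumption (the trace only shows the values being echoed, not the redirect targets), and key1/ckey1 name keys the script registered with SPDK earlier in the run, outside this excerpt.

    # One iteration of the authenticated-connect loop (keyid=1), condensed from the trace.
    # Target side: program the host's DH-HMAC-CHAP parameters into kernel nvmet via configfs.
    host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)'    > "$host_cfg/dhchap_hash"      # assumed attribute name
    echo ffdhe8192         > "$host_cfg/dhchap_dhgroup"   # assumed attribute name
    echo "DHHC-1:00:...==" > "$host_cfg/dhchap_key"       # key1 (value elided)
    echo "DHHC-1:02:...==" > "$host_cfg/dhchap_ctrl_key"  # ckey1 (value elided)

    # Initiator side: pin the digest/DH group, attach with the matching key pair,
    # verify the controller exists, then tear it down for the next keyid.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0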
07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.740 request: 00:17:20.740 { 00:17:20.740 "name": "nvme0", 00:17:20.740 "trtype": "tcp", 00:17:20.740 "traddr": "10.0.0.1", 00:17:20.740 "adrfam": "ipv4", 00:17:20.740 "trsvcid": "4420", 00:17:20.740 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:20.740 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:20.740 "prchk_reftag": false, 00:17:20.740 "prchk_guard": false, 00:17:20.740 "hdgst": false, 00:17:20.740 "ddgst": false, 00:17:20.740 "method": "bdev_nvme_attach_controller", 00:17:20.740 "req_id": 1 00:17:20.740 } 00:17:20.740 Got JSON-RPC error response 00:17:20.740 response: 00:17:20.740 { 00:17:20.740 "code": -5, 00:17:20.740 "message": "Input/output error" 00:17:20.740 } 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:20.740 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.741 request: 00:17:20.741 { 00:17:20.741 "name": "nvme0", 00:17:20.741 "trtype": "tcp", 00:17:20.741 "traddr": "10.0.0.1", 00:17:20.741 "adrfam": "ipv4", 00:17:20.741 "trsvcid": "4420", 00:17:20.741 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:20.741 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:20.741 "prchk_reftag": false, 00:17:20.741 "prchk_guard": false, 00:17:20.741 "hdgst": false, 00:17:20.741 "ddgst": false, 00:17:20.741 "dhchap_key": "key2", 00:17:20.741 "method": "bdev_nvme_attach_controller", 00:17:20.741 "req_id": 1 00:17:20.741 } 00:17:20.741 Got JSON-RPC error response 00:17:20.741 response: 00:17:20.741 { 00:17:20.741 "code": -5, 00:17:20.741 "message": "Input/output error" 00:17:20.741 } 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:20.741 07:21:29 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.741 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.999 request: 00:17:20.999 { 00:17:20.999 "name": "nvme0", 00:17:20.999 "trtype": "tcp", 00:17:20.999 "traddr": "10.0.0.1", 00:17:20.999 "adrfam": "ipv4", 
00:17:20.999 "trsvcid": "4420", 00:17:20.999 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:20.999 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:20.999 "prchk_reftag": false, 00:17:20.999 "prchk_guard": false, 00:17:20.999 "hdgst": false, 00:17:20.999 "ddgst": false, 00:17:20.999 "dhchap_key": "key1", 00:17:20.999 "dhchap_ctrlr_key": "ckey2", 00:17:20.999 "method": "bdev_nvme_attach_controller", 00:17:20.999 "req_id": 1 00:17:20.999 } 00:17:20.999 Got JSON-RPC error response 00:17:20.999 response: 00:17:20.999 { 00:17:20.999 "code": -5, 00:17:20.999 "message": "Input/output error" 00:17:20.999 } 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:20.999 rmmod nvme_tcp 00:17:20.999 rmmod nvme_fabrics 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 78480 ']' 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 78480 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 78480 ']' 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 78480 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78480 00:17:20.999 killing process with pid 78480 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78480' 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 78480 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 78480 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:20.999 
07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.999 07:21:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.280 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:21.280 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:21.280 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:21.280 07:21:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:21.280 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:21.280 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:17:21.280 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:21.280 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:21.280 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:21.280 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:21.280 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:21.280 07:21:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:21.280 07:21:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:21.847 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:22.105 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:22.105 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:22.106 07:21:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.gab /tmp/spdk.key-null.73Y /tmp/spdk.key-sha256.JGV /tmp/spdk.key-sha384.o6D /tmp/spdk.key-sha512.SOW /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:22.106 07:21:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:22.364 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:22.364 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:22.364 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:22.624 00:17:22.624 real 0m35.405s 00:17:22.624 user 0m31.453s 00:17:22.624 sys 0m3.521s 00:17:22.624 ************************************ 00:17:22.624 END TEST nvmf_auth_host 00:17:22.624 ************************************ 00:17:22.624 07:21:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:17:22.624 07:21:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.624 07:21:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:22.624 07:21:31 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:17:22.624 07:21:31 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:22.624 07:21:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:22.624 07:21:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:22.624 07:21:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:22.624 ************************************ 00:17:22.624 START TEST nvmf_digest 00:17:22.624 ************************************ 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:22.624 * Looking for test storage... 00:17:22.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != 
\t\c\p ]] 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:22.624 Cannot find device "nvmf_tgt_br" 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:22.624 Cannot find device "nvmf_tgt_br2" 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:22.624 Cannot find device "nvmf_tgt_br" 00:17:22.624 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:17:22.624 07:21:31 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:22.883 Cannot find device "nvmf_tgt_br2" 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:22.883 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:22.883 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:22.883 07:21:31 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:22.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:22.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:17:22.883 00:17:22.883 --- 10.0.0.2 ping statistics --- 00:17:22.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.883 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:22.883 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:22.883 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:17:22.883 00:17:22.883 --- 10.0.0.3 ping statistics --- 00:17:22.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.883 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:22.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:22.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:17:22.883 00:17:22.883 --- 10.0.0.1 ping statistics --- 00:17:22.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.883 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:22.883 07:21:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:23.142 07:21:31 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:23.142 07:21:31 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:23.142 07:21:31 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:23.142 07:21:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:23.142 07:21:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:23.142 07:21:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:23.142 ************************************ 00:17:23.142 START TEST nvmf_digest_clean 00:17:23.142 ************************************ 00:17:23.142 07:21:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:17:23.142 07:21:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:17:23.142 07:21:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:23.142 07:21:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:23.142 07:21:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:23.142 07:21:31 
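nvmftestinit above builds the virtual topology the digest tests run on: veth pairs for the initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything joined by a bridge, TCP/4420 opened in the firewall, and three pings to confirm 10.0.0.1 (initiator) can reach 10.0.0.2/10.0.0.3 (target). Stripped to its essential commands (the second target interface and error handling omitted), the setup is roughly:

    # Condensed from nvmf_veth_init in the trace; run as root, single target interface shown.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target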
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:23.142 07:21:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:23.142 07:21:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:23.142 07:21:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:23.142 07:21:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=80048 00:17:23.142 07:21:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:23.142 07:21:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 80048 00:17:23.142 07:21:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80048 ']' 00:17:23.142 07:21:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.142 07:21:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:23.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.142 07:21:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.142 07:21:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:23.142 07:21:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:23.142 [2024-07-15 07:21:31.940439] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:17:23.142 [2024-07-15 07:21:31.940764] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.142 [2024-07-15 07:21:32.085028] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.401 [2024-07-15 07:21:32.158619] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:23.401 [2024-07-15 07:21:32.158687] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:23.401 [2024-07-15 07:21:32.158714] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:23.401 [2024-07-15 07:21:32.158724] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:23.401 [2024-07-15 07:21:32.158733] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
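nvmfappstart then launches the target inside that namespace, held at the pre-init stage (--wait-for-rpc) so the digest test can choose its accel/sock configuration over RPC before the framework starts; waitforlisten just polls the RPC socket until the app answers. Roughly, with the paths as logged and the polling loop simplified:

    # Start nvmf_tgt inside the test namespace, paused at the pre-init RPC stage.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Simplified stand-in for waitforlisten: poll the default RPC socket until it responds.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done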
00:17:23.401 [2024-07-15 07:21:32.158784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.337 07:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:24.337 07:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:24.337 07:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:24.337 07:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:24.337 07:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:24.337 07:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.337 07:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:24.337 07:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:24.337 07:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:24.337 07:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.337 07:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:24.337 [2024-07-15 07:21:33.009307] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:24.337 null0 00:17:24.337 [2024-07-15 07:21:33.043656] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:24.337 [2024-07-15 07:21:33.067795] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.337 07:21:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.337 07:21:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:24.337 07:21:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:24.337 07:21:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:24.337 07:21:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:24.337 07:21:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:24.337 07:21:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:24.337 07:21:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:24.337 07:21:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:24.337 07:21:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80080 00:17:24.337 07:21:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80080 /var/tmp/bperf.sock 00:17:24.337 07:21:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80080 ']' 00:17:24.337 07:21:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:24.337 07:21:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:24.337 07:21:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:24.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:24.337 07:21:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:24.337 07:21:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:24.337 [2024-07-15 07:21:33.125566] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:17:24.337 [2024-07-15 07:21:33.125903] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80080 ] 00:17:24.337 [2024-07-15 07:21:33.260768] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.595 [2024-07-15 07:21:33.348518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.531 07:21:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:25.531 07:21:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:25.531 07:21:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:25.531 07:21:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:25.531 07:21:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:25.531 [2024-07-15 07:21:34.471180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:25.789 07:21:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:25.789 07:21:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:26.047 nvme0n1 00:17:26.047 07:21:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:26.047 07:21:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:26.047 Running I/O for 2 seconds... 
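This is the measurement half of run_bperf: bdevperf was started idle (-z --wait-for-rpc) on its own RPC socket, the harness completes its framework init, attaches a bdev over the target's TCP listener with data digest enabled (--ddgst, this being the digest test), and then drives the 2-second randread workload through bdevperf's helper script. Condensed from the trace:

    # bperf control sequence as logged (rpc.py / bdevperf.py paths from the trace).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bperf.sock framework_start_init
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests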
00:17:28.575 00:17:28.575 Latency(us) 00:17:28.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.575 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:28.575 nvme0n1 : 2.00 14509.93 56.68 0.00 0.00 8815.14 8043.05 20733.21 00:17:28.575 =================================================================================================================== 00:17:28.575 Total : 14509.93 56.68 0.00 0.00 8815.14 8043.05 20733.21 00:17:28.575 0 00:17:28.575 07:21:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:28.575 07:21:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:28.575 07:21:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:28.575 07:21:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:28.575 07:21:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:28.575 | select(.opcode=="crc32c") 00:17:28.575 | "\(.module_name) \(.executed)"' 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80080 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80080 ']' 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80080 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80080 00:17:28.575 killing process with pid 80080 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80080' 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80080 00:17:28.575 Received shutdown signal, test time was about 2.000000 seconds 00:17:28.575 00:17:28.575 Latency(us) 00:17:28.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.575 =================================================================================================================== 00:17:28.575 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80080 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80140 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80140 /var/tmp/bperf.sock 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80140 ']' 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:28.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:28.575 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:28.575 [2024-07-15 07:21:37.517474] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:17:28.575 [2024-07-15 07:21:37.517808] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-aI/O size of 131072 is greater than zero copy threshold (65536). 00:17:28.575 Zero copy mechanism will not be used. 
00:17:28.575 llocations --file-prefix=spdk_pid80140 ] 00:17:28.833 [2024-07-15 07:21:37.656742] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.833 [2024-07-15 07:21:37.715794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.833 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:28.833 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:28.833 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:28.833 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:28.833 07:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:29.401 [2024-07-15 07:21:38.046498] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:29.401 07:21:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:29.401 07:21:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:29.660 nvme0n1 00:17:29.660 07:21:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:29.660 07:21:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:29.660 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:29.660 Zero copy mechanism will not be used. 00:17:29.660 Running I/O for 2 seconds... 
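The get_accel_stats step that follows each run is essentially the pipeline below; the jq filter is the one echoed in the trace, and the final check mirrors the software-module comparison in digest.sh (sketch only):

# Ask bdevperf's accel layer which module executed crc32c and how many times.
read -r acc_module acc_executed < <(
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
# With scan_dsa=false the digests must have been computed in software.
(( acc_executed > 0 )) && [[ $acc_module == software ]]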
00:17:31.561 00:17:31.561 Latency(us) 00:17:31.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.561 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:31.561 nvme0n1 : 2.00 7054.99 881.87 0.00 0.00 2264.12 2070.34 7626.01 00:17:31.561 =================================================================================================================== 00:17:31.561 Total : 7054.99 881.87 0.00 0.00 2264.12 2070.34 7626.01 00:17:31.561 0 00:17:31.819 07:21:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:31.819 07:21:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:31.819 07:21:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:31.819 07:21:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:31.819 07:21:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:31.819 | select(.opcode=="crc32c") 00:17:31.819 | "\(.module_name) \(.executed)"' 00:17:32.078 07:21:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:32.078 07:21:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:32.078 07:21:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:32.078 07:21:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:32.078 07:21:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80140 00:17:32.078 07:21:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80140 ']' 00:17:32.078 07:21:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80140 00:17:32.078 07:21:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:32.078 07:21:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:32.078 07:21:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80140 00:17:32.078 07:21:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:32.078 07:21:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:32.078 07:21:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80140' 00:17:32.078 killing process with pid 80140 00:17:32.078 Received shutdown signal, test time was about 2.000000 seconds 00:17:32.078 00:17:32.078 Latency(us) 00:17:32.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.078 =================================================================================================================== 00:17:32.078 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:32.078 07:21:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80140 00:17:32.078 07:21:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80140 00:17:32.336 07:21:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:32.336 07:21:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:32.336 07:21:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:32.336 07:21:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:32.336 07:21:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:32.336 07:21:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:32.336 07:21:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:32.336 07:21:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80193 00:17:32.336 07:21:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80193 /var/tmp/bperf.sock 00:17:32.336 07:21:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80193 ']' 00:17:32.336 07:21:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:32.336 07:21:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:32.336 07:21:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:32.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:32.336 07:21:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:32.336 07:21:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:32.336 07:21:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:32.336 [2024-07-15 07:21:41.098096] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:17:32.336 [2024-07-15 07:21:41.098187] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80193 ] 00:17:32.336 [2024-07-15 07:21:41.238942] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.594 [2024-07-15 07:21:41.298125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.530 07:21:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:33.530 07:21:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:33.530 07:21:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:33.531 07:21:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:33.531 07:21:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:33.531 [2024-07-15 07:21:42.436165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:33.531 07:21:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:33.531 07:21:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:34.176 nvme0n1 00:17:34.176 07:21:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:34.176 07:21:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:34.176 Running I/O for 2 seconds... 
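For orientation, the clean-digest phase walks through four bdevperf workloads; the run_bperf calls echoed from host/digest.sh in this phase are:

# host/digest.sh, as exercised in this log (last argument is scan_dsa):
run_bperf randread  4096   128 false   # 4 KiB reads,    queue depth 128
run_bperf randread  131072 16  false   # 128 KiB reads,  queue depth 16
run_bperf randwrite 4096   128 false   # 4 KiB writes,   queue depth 128
run_bperf randwrite 131072 16  false   # 128 KiB writes, queue depth 16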
00:17:36.081 00:17:36.081 Latency(us) 00:17:36.081 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.081 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:36.081 nvme0n1 : 2.01 15509.68 60.58 0.00 0.00 8245.38 4974.78 15371.17 00:17:36.081 =================================================================================================================== 00:17:36.081 Total : 15509.68 60.58 0.00 0.00 8245.38 4974.78 15371.17 00:17:36.081 0 00:17:36.081 07:21:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:36.081 07:21:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:36.081 07:21:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:36.081 07:21:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:36.081 07:21:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:36.081 | select(.opcode=="crc32c") 00:17:36.081 | "\(.module_name) \(.executed)"' 00:17:36.340 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:36.340 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:36.340 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:36.340 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:36.340 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80193 00:17:36.340 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80193 ']' 00:17:36.340 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80193 00:17:36.340 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:36.340 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:36.340 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80193 00:17:36.340 killing process with pid 80193 00:17:36.340 Received shutdown signal, test time was about 2.000000 seconds 00:17:36.340 00:17:36.340 Latency(us) 00:17:36.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.340 =================================================================================================================== 00:17:36.340 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:36.340 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:36.340 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:36.340 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80193' 00:17:36.340 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80193 00:17:36.340 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80193 00:17:36.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:17:36.599 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:36.599 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:36.599 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:36.599 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:36.599 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:36.599 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:36.599 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:36.599 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80253 00:17:36.599 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80253 /var/tmp/bperf.sock 00:17:36.599 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80253 ']' 00:17:36.599 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:36.599 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.599 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:36.599 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:36.599 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.599 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:36.599 [2024-07-15 07:21:45.502907] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:17:36.599 [2024-07-15 07:21:45.503245] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-aI/O size of 131072 is greater than zero copy threshold (65536). 00:17:36.599 Zero copy mechanism will not be used. 
00:17:36.599 llocations --file-prefix=spdk_pid80253 ] 00:17:36.858 [2024-07-15 07:21:45.644404] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.858 [2024-07-15 07:21:45.713652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.858 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:36.858 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:36.858 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:36.858 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:36.858 07:21:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:37.425 [2024-07-15 07:21:46.112727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:37.425 07:21:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:37.425 07:21:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:37.684 nvme0n1 00:17:37.684 07:21:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:37.684 07:21:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:37.684 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:37.684 Zero copy mechanism will not be used. 00:17:37.684 Running I/O for 2 seconds... 
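The zero-copy notices in the large-block runs come from the 128 KiB I/O size; the bdevperf invocation differs from the 4 KiB case only in block size and queue depth (command reproduced from the trace above):

# 128 KiB random writes at queue depth 16; 131072 exceeds the 65536 zero-copy
# threshold, hence "Zero copy mechanism will not be used".
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc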
00:17:40.216 00:17:40.216 Latency(us) 00:17:40.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.216 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:40.216 nvme0n1 : 2.00 5939.89 742.49 0.00 0.00 2687.52 2070.34 8698.41 00:17:40.216 =================================================================================================================== 00:17:40.216 Total : 5939.89 742.49 0.00 0.00 2687.52 2070.34 8698.41 00:17:40.216 0 00:17:40.216 07:21:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:40.216 07:21:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:40.216 07:21:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:40.216 07:21:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:40.216 07:21:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:40.216 | select(.opcode=="crc32c") 00:17:40.216 | "\(.module_name) \(.executed)"' 00:17:40.216 07:21:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:40.216 07:21:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:40.216 07:21:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:40.216 07:21:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:40.216 07:21:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80253 00:17:40.216 07:21:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80253 ']' 00:17:40.216 07:21:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80253 00:17:40.216 07:21:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:40.216 07:21:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:40.216 07:21:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80253 00:17:40.216 killing process with pid 80253 00:17:40.216 Received shutdown signal, test time was about 2.000000 seconds 00:17:40.216 00:17:40.216 Latency(us) 00:17:40.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.216 =================================================================================================================== 00:17:40.216 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:40.216 07:21:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:40.216 07:21:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:40.216 07:21:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80253' 00:17:40.216 07:21:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80253 00:17:40.216 07:21:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80253 00:17:40.216 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80048 00:17:40.216 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 80048 ']' 00:17:40.216 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80048 00:17:40.216 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:40.216 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:40.216 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80048 00:17:40.216 killing process with pid 80048 00:17:40.216 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:40.216 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:40.216 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80048' 00:17:40.216 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80048 00:17:40.216 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80048 00:17:40.476 00:17:40.476 real 0m17.437s 00:17:40.476 user 0m34.165s 00:17:40.476 sys 0m4.487s 00:17:40.476 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:40.476 ************************************ 00:17:40.476 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:40.476 END TEST nvmf_digest_clean 00:17:40.476 ************************************ 00:17:40.476 07:21:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:17:40.476 07:21:49 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:17:40.476 07:21:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:40.476 07:21:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:40.476 07:21:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:40.476 ************************************ 00:17:40.476 START TEST nvmf_digest_error 00:17:40.476 ************************************ 00:17:40.476 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:17:40.476 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:17:40.476 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:40.476 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:40.476 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:40.476 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=80330 00:17:40.476 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 80330 00:17:40.476 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80330 ']' 00:17:40.476 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.476 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
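The killprocess calls traced above all follow the same autotest_common.sh pattern; in simplified form, using the bperf instance of the last run as the example:

# Simplified shape of killprocess() as exercised in this log.
pid=80253
kill -0 "$pid"                      # process must still be alive
ps --no-headers -o comm= "$pid"     # reactor_1 here, so no sudo handling needed
echo "killing process with pid $pid"
kill "$pid"
wait "$pid"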
00:17:40.476 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.476 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:40.476 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.476 07:21:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:40.476 [2024-07-15 07:21:49.412367] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:17:40.476 [2024-07-15 07:21:49.412466] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.735 [2024-07-15 07:21:49.552578] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.735 [2024-07-15 07:21:49.621971] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.735 [2024-07-15 07:21:49.622041] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.735 [2024-07-15 07:21:49.622056] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.735 [2024-07-15 07:21:49.622066] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.735 [2024-07-15 07:21:49.622095] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
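The error-injection phase restarts the target via nvmfappstart with every tracepoint group enabled and setup gated behind RPC; the target command line is the one captured above, while the PID handling is only sketched:

# Target side: nvmf_tgt inside the test netns, tracepoints on, waiting for RPCs.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!                           # 80330 in this run (capture via $! is assumed here)
waitforlisten "$nvmfpid"             # blocks until /var/tmp/spdk.sock accepts RPCs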
00:17:40.735 [2024-07-15 07:21:49.622132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.669 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:41.669 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:41.669 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:41.669 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:41.669 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:41.669 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.669 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:41.669 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.669 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:41.669 [2024-07-15 07:21:50.462722] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:41.669 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.669 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:17:41.669 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:17:41.669 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.669 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:41.669 [2024-07-15 07:21:50.500357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:41.669 null0 00:17:41.669 [2024-07-15 07:21:50.534023] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.669 [2024-07-15 07:21:50.558181] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:17:41.669 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.670 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:17:41.670 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:41.670 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:41.670 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:41.670 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:41.670 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80362 00:17:41.670 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80362 /var/tmp/bperf.sock 00:17:41.670 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80362 ']' 00:17:41.670 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:41.670 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:41.670 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:41.670 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:41.670 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:41.670 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:41.670 [2024-07-15 07:21:50.618530] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:17:41.670 [2024-07-15 07:21:50.618843] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80362 ] 00:17:41.927 [2024-07-15 07:21:50.753577] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.927 [2024-07-15 07:21:50.811459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.927 [2024-07-15 07:21:50.840469] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:42.185 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:42.185 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:42.185 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:42.185 07:21:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:42.443 07:21:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:42.443 07:21:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.443 07:21:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:42.443 07:21:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.443 07:21:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:42.443 07:21:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:42.701 nvme0n1 00:17:42.701 07:21:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:42.701 07:21:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.701 07:21:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:42.701 07:21:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.701 07:21:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:42.701 07:21:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:42.960 Running I/O for 2 seconds... 
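The run_bperf_err body that produced the digest-error entries below is, in condensed form (commands taken from this trace; the -s /var/tmp/bperf.sock calls go to bdevperf, rpc_cmd goes to the nvmf target):

# Initiator: keep NVMe error statistics and retry failed commands indefinitely.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Target: no injection while the controller attaches ...
rpc_cmd accel_error_inject_error -o crc32c -t disable
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# ... then corrupt crc32c results (flags as traced) and run the workload,
# which yields the "data digest error" completions that follow.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests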
00:17:42.960 [2024-07-15 07:21:51.750001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:42.960 [2024-07-15 07:21:51.750062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.960 [2024-07-15 07:21:51.750094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.960 [2024-07-15 07:21:51.767532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:42.960 [2024-07-15 07:21:51.767579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.960 [2024-07-15 07:21:51.767594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.960 [2024-07-15 07:21:51.785180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:42.960 [2024-07-15 07:21:51.785226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.960 [2024-07-15 07:21:51.785241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.960 [2024-07-15 07:21:51.802778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:42.960 [2024-07-15 07:21:51.802820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.960 [2024-07-15 07:21:51.802850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.960 [2024-07-15 07:21:51.820343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:42.960 [2024-07-15 07:21:51.820382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.960 [2024-07-15 07:21:51.820396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.960 [2024-07-15 07:21:51.837917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:42.960 [2024-07-15 07:21:51.837957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.960 [2024-07-15 07:21:51.837971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.960 [2024-07-15 07:21:51.855370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:42.960 [2024-07-15 07:21:51.855413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.960 [2024-07-15 07:21:51.855428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.960 [2024-07-15 07:21:51.872836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:42.960 [2024-07-15 07:21:51.872885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.960 [2024-07-15 07:21:51.872900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.960 [2024-07-15 07:21:51.890330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:42.960 [2024-07-15 07:21:51.890372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.960 [2024-07-15 07:21:51.890386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.960 [2024-07-15 07:21:51.907767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:42.960 [2024-07-15 07:21:51.907807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.960 [2024-07-15 07:21:51.907821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.217 [2024-07-15 07:21:51.925287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.217 [2024-07-15 07:21:51.925326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.217 [2024-07-15 07:21:51.925339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.217 [2024-07-15 07:21:51.942722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.217 [2024-07-15 07:21:51.942763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.217 [2024-07-15 07:21:51.942777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.217 [2024-07-15 07:21:51.960234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.217 [2024-07-15 07:21:51.960281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.218 [2024-07-15 07:21:51.960295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.218 [2024-07-15 07:21:51.977745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.218 [2024-07-15 07:21:51.977790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.218 [2024-07-15 07:21:51.977804] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.218 [2024-07-15 07:21:51.995714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.218 [2024-07-15 07:21:51.995757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.218 [2024-07-15 07:21:51.995771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.218 [2024-07-15 07:21:52.013332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.218 [2024-07-15 07:21:52.013372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.218 [2024-07-15 07:21:52.013386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.218 [2024-07-15 07:21:52.031265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.218 [2024-07-15 07:21:52.031303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.218 [2024-07-15 07:21:52.031317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.218 [2024-07-15 07:21:52.048711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.218 [2024-07-15 07:21:52.048751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.218 [2024-07-15 07:21:52.048765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.218 [2024-07-15 07:21:52.066345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.218 [2024-07-15 07:21:52.066384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.218 [2024-07-15 07:21:52.066398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.218 [2024-07-15 07:21:52.083732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.218 [2024-07-15 07:21:52.083774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.218 [2024-07-15 07:21:52.083789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.218 [2024-07-15 07:21:52.101189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.218 [2024-07-15 07:21:52.101229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.218 [2024-07-15 
07:21:52.101244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.218 [2024-07-15 07:21:52.118589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.218 [2024-07-15 07:21:52.118628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.218 [2024-07-15 07:21:52.118641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.218 [2024-07-15 07:21:52.136044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.218 [2024-07-15 07:21:52.136098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.218 [2024-07-15 07:21:52.136115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.218 [2024-07-15 07:21:52.153485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.218 [2024-07-15 07:21:52.153525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.218 [2024-07-15 07:21:52.153548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.218 [2024-07-15 07:21:52.170869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.218 [2024-07-15 07:21:52.170910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.218 [2024-07-15 07:21:52.170924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.476 [2024-07-15 07:21:52.188300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.476 [2024-07-15 07:21:52.188338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.476 [2024-07-15 07:21:52.188352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.476 [2024-07-15 07:21:52.205663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.476 [2024-07-15 07:21:52.205709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.476 [2024-07-15 07:21:52.205722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.476 [2024-07-15 07:21:52.223041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.476 [2024-07-15 07:21:52.223094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10920 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:43.476 [2024-07-15 07:21:52.223108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.476 [2024-07-15 07:21:52.240393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.476 [2024-07-15 07:21:52.240431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:58 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.476 [2024-07-15 07:21:52.240445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.476 [2024-07-15 07:21:52.257783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.476 [2024-07-15 07:21:52.257824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.476 [2024-07-15 07:21:52.257838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.476 [2024-07-15 07:21:52.275170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.476 [2024-07-15 07:21:52.275210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.476 [2024-07-15 07:21:52.275223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.476 [2024-07-15 07:21:52.292817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.477 [2024-07-15 07:21:52.292867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.477 [2024-07-15 07:21:52.292882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.477 [2024-07-15 07:21:52.310574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.477 [2024-07-15 07:21:52.310620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.477 [2024-07-15 07:21:52.310635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.477 [2024-07-15 07:21:52.328088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.477 [2024-07-15 07:21:52.328129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.477 [2024-07-15 07:21:52.328144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.477 [2024-07-15 07:21:52.345510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.477 [2024-07-15 07:21:52.345563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:69 nsid:1 lba:12900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.477 [2024-07-15 07:21:52.345578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.477 [2024-07-15 07:21:52.362944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.477 [2024-07-15 07:21:52.362991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.477 [2024-07-15 07:21:52.363006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.477 [2024-07-15 07:21:52.380633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.477 [2024-07-15 07:21:52.380681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.477 [2024-07-15 07:21:52.380696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.477 [2024-07-15 07:21:52.398185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.477 [2024-07-15 07:21:52.398228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.477 [2024-07-15 07:21:52.398242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.477 [2024-07-15 07:21:52.415580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.477 [2024-07-15 07:21:52.415621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.477 [2024-07-15 07:21:52.415636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.735 [2024-07-15 07:21:52.433023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.735 [2024-07-15 07:21:52.433065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.735 [2024-07-15 07:21:52.433092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.735 [2024-07-15 07:21:52.450428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.735 [2024-07-15 07:21:52.450470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.735 [2024-07-15 07:21:52.450484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.735 [2024-07-15 07:21:52.467817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.735 [2024-07-15 07:21:52.467859] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.735 [2024-07-15 07:21:52.467873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.735 [2024-07-15 07:21:52.485362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.735 [2024-07-15 07:21:52.485405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.735 [2024-07-15 07:21:52.485419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.736 [2024-07-15 07:21:52.502853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.736 [2024-07-15 07:21:52.502896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.736 [2024-07-15 07:21:52.502911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.736 [2024-07-15 07:21:52.520453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.736 [2024-07-15 07:21:52.520528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.736 [2024-07-15 07:21:52.520544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.736 [2024-07-15 07:21:52.537898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.736 [2024-07-15 07:21:52.537939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.736 [2024-07-15 07:21:52.537953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.736 [2024-07-15 07:21:52.555260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.736 [2024-07-15 07:21:52.555299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.736 [2024-07-15 07:21:52.555313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.736 [2024-07-15 07:21:52.572895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.736 [2024-07-15 07:21:52.572968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.736 [2024-07-15 07:21:52.572984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.736 [2024-07-15 07:21:52.590505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 
00:17:43.736 [2024-07-15 07:21:52.590552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.736 [2024-07-15 07:21:52.590567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.736 [2024-07-15 07:21:52.607892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.736 [2024-07-15 07:21:52.607935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.736 [2024-07-15 07:21:52.607949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.736 [2024-07-15 07:21:52.625326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.736 [2024-07-15 07:21:52.625367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.736 [2024-07-15 07:21:52.625382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.736 [2024-07-15 07:21:52.642772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.736 [2024-07-15 07:21:52.642818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.736 [2024-07-15 07:21:52.642832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.736 [2024-07-15 07:21:52.661519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.736 [2024-07-15 07:21:52.661589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.736 [2024-07-15 07:21:52.661604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.736 [2024-07-15 07:21:52.679643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.736 [2024-07-15 07:21:52.679706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.736 [2024-07-15 07:21:52.679722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.995 [2024-07-15 07:21:52.697539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.995 [2024-07-15 07:21:52.697598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.995 [2024-07-15 07:21:52.697613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.995 [2024-07-15 07:21:52.715205] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.995 [2024-07-15 07:21:52.715262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.995 [2024-07-15 07:21:52.715279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.995 [2024-07-15 07:21:52.733296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.995 [2024-07-15 07:21:52.733359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.995 [2024-07-15 07:21:52.733375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.995 [2024-07-15 07:21:52.751031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.995 [2024-07-15 07:21:52.751094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.995 [2024-07-15 07:21:52.751111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.995 [2024-07-15 07:21:52.768612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.995 [2024-07-15 07:21:52.768666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.995 [2024-07-15 07:21:52.768686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.995 [2024-07-15 07:21:52.786092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.995 [2024-07-15 07:21:52.786139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.995 [2024-07-15 07:21:52.786153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.995 [2024-07-15 07:21:52.803464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.995 [2024-07-15 07:21:52.803509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.995 [2024-07-15 07:21:52.803523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.995 [2024-07-15 07:21:52.820868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.995 [2024-07-15 07:21:52.820914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.995 [2024-07-15 07:21:52.820928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:17:43.995 [2024-07-15 07:21:52.838328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.995 [2024-07-15 07:21:52.838369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.995 [2024-07-15 07:21:52.838384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.995 [2024-07-15 07:21:52.863338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.995 [2024-07-15 07:21:52.863384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.995 [2024-07-15 07:21:52.863398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.995 [2024-07-15 07:21:52.880733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.995 [2024-07-15 07:21:52.880776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.995 [2024-07-15 07:21:52.880790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.995 [2024-07-15 07:21:52.898120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.995 [2024-07-15 07:21:52.898160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.995 [2024-07-15 07:21:52.898174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.995 [2024-07-15 07:21:52.915605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.995 [2024-07-15 07:21:52.915679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.995 [2024-07-15 07:21:52.915695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.995 [2024-07-15 07:21:52.933179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:43.995 [2024-07-15 07:21:52.933226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.995 [2024-07-15 07:21:52.933240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.254 [2024-07-15 07:21:52.950647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.254 [2024-07-15 07:21:52.950688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.254 [2024-07-15 07:21:52.950703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.254 [2024-07-15 07:21:52.968041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.254 [2024-07-15 07:21:52.968105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.254 [2024-07-15 07:21:52.968120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.254 [2024-07-15 07:21:52.985482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.254 [2024-07-15 07:21:52.985540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.254 [2024-07-15 07:21:52.985555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.254 [2024-07-15 07:21:53.002979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.254 [2024-07-15 07:21:53.003030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.254 [2024-07-15 07:21:53.003045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.254 [2024-07-15 07:21:53.020568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.254 [2024-07-15 07:21:53.020617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.254 [2024-07-15 07:21:53.020632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.254 [2024-07-15 07:21:53.037983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.254 [2024-07-15 07:21:53.038025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.254 [2024-07-15 07:21:53.038038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.254 [2024-07-15 07:21:53.055389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.254 [2024-07-15 07:21:53.055431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.254 [2024-07-15 07:21:53.055445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.254 [2024-07-15 07:21:53.072833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.254 [2024-07-15 07:21:53.072873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.254 [2024-07-15 
07:21:53.072886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.254 [2024-07-15 07:21:53.090267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.254 [2024-07-15 07:21:53.090328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.254 [2024-07-15 07:21:53.090343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.254 [2024-07-15 07:21:53.107834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.254 [2024-07-15 07:21:53.107880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.254 [2024-07-15 07:21:53.107895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.254 [2024-07-15 07:21:53.125313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.254 [2024-07-15 07:21:53.125356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.254 [2024-07-15 07:21:53.125371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.254 [2024-07-15 07:21:53.142684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.254 [2024-07-15 07:21:53.142731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.254 [2024-07-15 07:21:53.142746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.254 [2024-07-15 07:21:53.160120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.254 [2024-07-15 07:21:53.160166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.254 [2024-07-15 07:21:53.160180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.254 [2024-07-15 07:21:53.177641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.254 [2024-07-15 07:21:53.177703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:42 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.254 [2024-07-15 07:21:53.177718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.254 [2024-07-15 07:21:53.195128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.254 [2024-07-15 07:21:53.195168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6764 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:44.254 [2024-07-15 07:21:53.195182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.512 [2024-07-15 07:21:53.212614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.512 [2024-07-15 07:21:53.212679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.512 [2024-07-15 07:21:53.212695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.512 [2024-07-15 07:21:53.230129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.512 [2024-07-15 07:21:53.230173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.512 [2024-07-15 07:21:53.230187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.512 [2024-07-15 07:21:53.247478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.512 [2024-07-15 07:21:53.247518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.512 [2024-07-15 07:21:53.247531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.512 [2024-07-15 07:21:53.264854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.512 [2024-07-15 07:21:53.264896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.512 [2024-07-15 07:21:53.264910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.512 [2024-07-15 07:21:53.282291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.512 [2024-07-15 07:21:53.282330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.512 [2024-07-15 07:21:53.282344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.512 [2024-07-15 07:21:53.299651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.512 [2024-07-15 07:21:53.299692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.512 [2024-07-15 07:21:53.299705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.512 [2024-07-15 07:21:53.317134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.512 [2024-07-15 07:21:53.317189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:14837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.512 [2024-07-15 07:21:53.317204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.512 [2024-07-15 07:21:53.336135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.512 [2024-07-15 07:21:53.336186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.512 [2024-07-15 07:21:53.336202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.512 [2024-07-15 07:21:53.354100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.512 [2024-07-15 07:21:53.354147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.512 [2024-07-15 07:21:53.354163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.512 [2024-07-15 07:21:53.373168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.512 [2024-07-15 07:21:53.373215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.512 [2024-07-15 07:21:53.373229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.512 [2024-07-15 07:21:53.392596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.512 [2024-07-15 07:21:53.392650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.512 [2024-07-15 07:21:53.392665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.512 [2024-07-15 07:21:53.410426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.512 [2024-07-15 07:21:53.410477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.512 [2024-07-15 07:21:53.410492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.512 [2024-07-15 07:21:53.428049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.512 [2024-07-15 07:21:53.428109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.512 [2024-07-15 07:21:53.428125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.512 [2024-07-15 07:21:53.445598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.512 [2024-07-15 07:21:53.445650] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.512 [2024-07-15 07:21:53.445665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.512 [2024-07-15 07:21:53.463200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.512 [2024-07-15 07:21:53.463254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.512 [2024-07-15 07:21:53.463268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.771 [2024-07-15 07:21:53.480781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.771 [2024-07-15 07:21:53.480826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.771 [2024-07-15 07:21:53.480840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.771 [2024-07-15 07:21:53.498293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.771 [2024-07-15 07:21:53.498337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.771 [2024-07-15 07:21:53.498352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.771 [2024-07-15 07:21:53.515860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.771 [2024-07-15 07:21:53.515908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.771 [2024-07-15 07:21:53.515923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.771 [2024-07-15 07:21:53.533315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.771 [2024-07-15 07:21:53.533363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.771 [2024-07-15 07:21:53.533378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.771 [2024-07-15 07:21:53.550848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.771 [2024-07-15 07:21:53.550894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.771 [2024-07-15 07:21:53.550909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.771 [2024-07-15 07:21:53.568341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 
00:17:44.771 [2024-07-15 07:21:53.568388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.771 [2024-07-15 07:21:53.568402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.771 [2024-07-15 07:21:53.586010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.771 [2024-07-15 07:21:53.586090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.771 [2024-07-15 07:21:53.586108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.771 [2024-07-15 07:21:53.603422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.771 [2024-07-15 07:21:53.603464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.771 [2024-07-15 07:21:53.603478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.771 [2024-07-15 07:21:53.620828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.771 [2024-07-15 07:21:53.620874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.771 [2024-07-15 07:21:53.620888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.771 [2024-07-15 07:21:53.638409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.771 [2024-07-15 07:21:53.638457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.771 [2024-07-15 07:21:53.638472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.771 [2024-07-15 07:21:53.655934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.771 [2024-07-15 07:21:53.655995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.771 [2024-07-15 07:21:53.656009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.771 [2024-07-15 07:21:53.673549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.771 [2024-07-15 07:21:53.673592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.771 [2024-07-15 07:21:53.673606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.771 [2024-07-15 07:21:53.690970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.771 [2024-07-15 07:21:53.691013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.771 [2024-07-15 07:21:53.691027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.771 [2024-07-15 07:21:53.708409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:44.771 [2024-07-15 07:21:53.708453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.771 [2024-07-15 07:21:53.708468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.029 [2024-07-15 07:21:53.725455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e03020) 00:17:45.029 [2024-07-15 07:21:53.725504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.029 [2024-07-15 07:21:53.725519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.029 00:17:45.029 Latency(us) 00:17:45.029 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.029 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:45.029 nvme0n1 : 2.01 14372.06 56.14 0.00 0.00 8898.11 8281.37 33840.41 00:17:45.029 =================================================================================================================== 00:17:45.029 Total : 14372.06 56.14 0.00 0.00 8898.11 8281.37 33840.41 00:17:45.029 0 00:17:45.029 07:21:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:45.029 07:21:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:45.029 07:21:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:45.029 | .driver_specific 00:17:45.029 | .nvme_error 00:17:45.029 | .status_code 00:17:45.029 | .command_transient_transport_error' 00:17:45.029 07:21:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:45.287 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 113 > 0 )) 00:17:45.287 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80362 00:17:45.287 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80362 ']' 00:17:45.287 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80362 00:17:45.287 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:17:45.287 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:45.287 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80362 00:17:45.287 killing process with pid 80362 00:17:45.287 Received shutdown signal, test time was about 2.000000 seconds 00:17:45.287 00:17:45.287 
Latency(us) 00:17:45.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.287 =================================================================================================================== 00:17:45.287 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:45.287 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:45.287 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:45.287 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80362' 00:17:45.287 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80362 00:17:45.287 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80362 00:17:45.287 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:17:45.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:45.287 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:45.287 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:45.288 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:45.288 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:45.288 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80415 00:17:45.288 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80415 /var/tmp/bperf.sock 00:17:45.288 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80415 ']' 00:17:45.288 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:17:45.288 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:45.288 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:45.288 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:45.288 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:45.288 07:21:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:45.545 [2024-07-15 07:21:54.260432] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:17:45.545 [2024-07-15 07:21:54.260755] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80415 ] 00:17:45.545 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:45.545 Zero copy mechanism will not be used. 
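
The xtrace above is the next digest-error pass being brought up: host/digest.sh relaunches bdevperf for 131072-byte random reads at queue depth 16 and waits for its RPC socket before configuring it. Condensed into plain shell, the launch step amounts to roughly the sketch below; the suite's waitforlisten helper adds a retry limit and xtrace handling that are omitted here, and the polling loop is an assumption about how the wait is implemented — only the bdevperf command line and socket path are taken verbatim from the trace:

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bperf.sock

  # Start bdevperf on core 1 (core mask 0x2) in wait-for-RPC mode (-z):
  # randread, 128 KiB I/O, queue depth 16, 2-second run, RPC served on $SOCK.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!

  # Poll the socket until the app answers RPCs; rpc_get_methods is a cheap query.
  until "$SPDK/scripts/rpc.py" -s "$SOCK" -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done

Once the socket answers, the harness configures this bdevperf instance over /var/tmp/bperf.sock, which is what the RPC trace that follows shows.
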
00:17:45.545 [2024-07-15 07:21:54.395613] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.545 [2024-07-15 07:21:54.453581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.545 [2024-07-15 07:21:54.482237] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:46.479 07:21:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:46.479 07:21:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:46.479 07:21:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:46.479 07:21:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:46.737 07:21:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:46.737 07:21:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.737 07:21:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:46.737 07:21:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.737 07:21:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:46.737 07:21:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:46.995 nvme0n1 00:17:46.995 07:21:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:46.995 07:21:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.995 07:21:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:46.996 07:21:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.996 07:21:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:46.996 07:21:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:47.255 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:47.255 Zero copy mechanism will not be used. 00:17:47.255 Running I/O for 2 seconds... 
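
The trace above is the whole setup for this pass: NVMe error statistics are enabled with unlimited bdev-layer retries, crc32c error injection is cleared and then re-armed (-t corrupt -i 32) so data digest checks fail during the run, the controller is attached over TCP with data digest enabled (--ddgst), and bdevperf.py triggers the timed workload. Reduced to plain shell it is roughly the sketch below; bperf_rpc and rpc_cmd are simplified stand-ins for the suite's helpers (the real ones add retries and error handling), and pointing rpc_cmd at the default RPC socket is an assumption — the RPC names, flags and addresses are otherwise taken from the trace:

  SPDK=/home/vagrant/spdk_repo/spdk
  bperf_rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }  # the bdevperf instance
  rpc_cmd()   { "$SPDK/scripts/rpc.py" "$@"; }  # default RPC socket (assumption: the nvmf target app)

  # Keep per-bdev NVMe error counters and retry failed I/O indefinitely in the bdev layer.
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any previous injection, attach the controller with data digest checking enabled,
  # then re-arm crc32c error injection (-t corrupt -i 32, as traced) for the run.
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
            -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

  # Run the timed workload, then read back how many completions ended in
  # COMMAND TRANSIENT TRANSPORT ERROR (00/22) for nvme0n1.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
  bperf_rpc bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The same readback closed the 4096-byte pass above, where it returned 113 and satisfied the (( count > 0 )) check in host/digest.sh before that bdevperf process was killed; the flood of data digest errors and transient-transport-error completions that follows here is the expected effect of the injected crc32c corruption.
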
00:17:47.255 [2024-07-15 07:21:55.979182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:47.255 [2024-07-15 07:21:55.979242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.255 [2024-07-15 07:21:55.979259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.255 [2024-07-15 07:21:55.983555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:47.255 [2024-07-15 07:21:55.983598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.255 [2024-07-15 07:21:55.983614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.255 [2024-07-15 07:21:55.988209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:47.255 [2024-07-15 07:21:55.988250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.255 [2024-07-15 07:21:55.988265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.255 [2024-07-15 07:21:55.992674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:47.255 [2024-07-15 07:21:55.992716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.255 [2024-07-15 07:21:55.992732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.255 [2024-07-15 07:21:55.997144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:47.255 [2024-07-15 07:21:55.997184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.255 [2024-07-15 07:21:55.997199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.255 [2024-07-15 07:21:56.001706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:47.255 [2024-07-15 07:21:56.001751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.255 [2024-07-15 07:21:56.001766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.255 [2024-07-15 07:21:56.006391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:47.255 [2024-07-15 07:21:56.006436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.255 [2024-07-15 07:21:56.006452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:17:47.255 [2024-07-15 07:21:56.010800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0)
00:17:47.255 [2024-07-15 07:21:56.010842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:47.255 [2024-07-15 07:21:56.010857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[output condensed: the three-record pattern above — data digest error on tqpair=(0x1b46ac0), READ sqid:1 cid:15 nsid:1 len:32, COMMAND TRANSIENT TRANSPORT ERROR (00/22) — repeats for roughly 140 further completions with varying lba and sqhd values, application timestamps 07:21:56.015 through 07:21:56.645, console timestamps 00:17:47.255 through 00:17:47.779]
00:17:47.779 [2024-07-15 07:21:56.650149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0)
00:17:47.779 [2024-07-15 07:21:56.650186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.779 [2024-07-15 07:21:56.650200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.779 [2024-07-15 07:21:56.654551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:47.779 [2024-07-15 07:21:56.654594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.779 [2024-07-15 07:21:56.654608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.779 [2024-07-15 07:21:56.659018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:47.779 [2024-07-15 07:21:56.659061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.779 [2024-07-15 07:21:56.659095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.779 [2024-07-15 07:21:56.663407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:47.779 [2024-07-15 07:21:56.663447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.779 [2024-07-15 07:21:56.663462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.779 [2024-07-15 07:21:56.667814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:47.779 [2024-07-15 07:21:56.667855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.779 [2024-07-15 07:21:56.667869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.779 [2024-07-15 07:21:56.672274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:47.779 [2024-07-15 07:21:56.672314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.779 [2024-07-15 07:21:56.672329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.779 [2024-07-15 07:21:56.676610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:47.779 [2024-07-15 07:21:56.676650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.779 [2024-07-15 07:21:56.676664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.779 [2024-07-15 07:21:56.680932] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:47.780 [2024-07-15 07:21:56.680972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.780 [2024-07-15 07:21:56.680987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.780 [2024-07-15 07:21:56.685330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:47.780 [2024-07-15 07:21:56.685368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.780 [2024-07-15 07:21:56.685383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.780 [2024-07-15 07:21:56.689753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:47.780 [2024-07-15 07:21:56.689792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.780 [2024-07-15 07:21:56.689807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.780 [2024-07-15 07:21:56.694268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:47.780 [2024-07-15 07:21:56.694307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.780 [2024-07-15 07:21:56.694322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.780 [2024-07-15 07:21:56.698604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:47.780 [2024-07-15 07:21:56.698644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.780 [2024-07-15 07:21:56.698658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.780 [2024-07-15 07:21:56.703153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:47.780 [2024-07-15 07:21:56.703197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.780 [2024-07-15 07:21:56.703212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.780 [2024-07-15 07:21:56.707689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:47.780 [2024-07-15 07:21:56.707732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.780 [2024-07-15 07:21:56.707747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:17:47.780 [2024-07-15 07:21:56.712261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:47.780 [2024-07-15 07:21:56.712302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.780 [2024-07-15 07:21:56.712318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.780 [2024-07-15 07:21:56.716555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:47.780 [2024-07-15 07:21:56.716595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.780 [2024-07-15 07:21:56.716609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.780 [2024-07-15 07:21:56.720988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:47.780 [2024-07-15 07:21:56.721030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.780 [2024-07-15 07:21:56.721045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.780 [2024-07-15 07:21:56.725416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:47.780 [2024-07-15 07:21:56.725454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.780 [2024-07-15 07:21:56.725485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.780 [2024-07-15 07:21:56.729798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:47.780 [2024-07-15 07:21:56.729838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.780 [2024-07-15 07:21:56.729852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.039 [2024-07-15 07:21:56.734307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.039 [2024-07-15 07:21:56.734363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.039 [2024-07-15 07:21:56.734394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.039 [2024-07-15 07:21:56.738778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.039 [2024-07-15 07:21:56.738818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.039 [2024-07-15 07:21:56.738838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.039 [2024-07-15 07:21:56.743218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.039 [2024-07-15 07:21:56.743257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.039 [2024-07-15 07:21:56.743271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.039 [2024-07-15 07:21:56.747582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.039 [2024-07-15 07:21:56.747623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.039 [2024-07-15 07:21:56.747638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.039 [2024-07-15 07:21:56.752068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.039 [2024-07-15 07:21:56.752122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.039 [2024-07-15 07:21:56.752138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.039 [2024-07-15 07:21:56.756446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.039 [2024-07-15 07:21:56.756485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.039 [2024-07-15 07:21:56.756517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.760931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.760972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.040 [2024-07-15 07:21:56.760987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.765236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.765275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.040 [2024-07-15 07:21:56.765290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.769619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.769659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.040 [2024-07-15 07:21:56.769673] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.774032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.774087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.040 [2024-07-15 07:21:56.774104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.778487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.778527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.040 [2024-07-15 07:21:56.778542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.782872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.782913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.040 [2024-07-15 07:21:56.782927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.787268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.787308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.040 [2024-07-15 07:21:56.787322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.791628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.791668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.040 [2024-07-15 07:21:56.791683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.796090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.796143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.040 [2024-07-15 07:21:56.796158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.800561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.800604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:48.040 [2024-07-15 07:21:56.800618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.804978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.805018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.040 [2024-07-15 07:21:56.805032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.809445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.809485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.040 [2024-07-15 07:21:56.809499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.813784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.813824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.040 [2024-07-15 07:21:56.813838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.818084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.818121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.040 [2024-07-15 07:21:56.818136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.822408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.822447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.040 [2024-07-15 07:21:56.822461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.826782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.826822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.040 [2024-07-15 07:21:56.826836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.831265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.831304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.040 [2024-07-15 07:21:56.831318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.835557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.835597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.040 [2024-07-15 07:21:56.835611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.840015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.840055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.040 [2024-07-15 07:21:56.840085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.844332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.844371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.040 [2024-07-15 07:21:56.844385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.848696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.848735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.040 [2024-07-15 07:21:56.848749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.853121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.853159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.040 [2024-07-15 07:21:56.853173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.857624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.857664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.040 [2024-07-15 07:21:56.857678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.862257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.862299] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.040 [2024-07-15 07:21:56.862315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.866684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.866726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.040 [2024-07-15 07:21:56.866741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.871187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.871226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.040 [2024-07-15 07:21:56.871256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.040 [2024-07-15 07:21:56.875457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.040 [2024-07-15 07:21:56.875496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.875511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.879795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.041 [2024-07-15 07:21:56.879835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.879849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.884182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.041 [2024-07-15 07:21:56.884221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.884235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.888544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.041 [2024-07-15 07:21:56.888584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.888598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.892916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 
00:17:48.041 [2024-07-15 07:21:56.892956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.892971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.897270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.041 [2024-07-15 07:21:56.897309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.897323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.901656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.041 [2024-07-15 07:21:56.901695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.901710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.905970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.041 [2024-07-15 07:21:56.906009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.906024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.910316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.041 [2024-07-15 07:21:56.910356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.910370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.914808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.041 [2024-07-15 07:21:56.914849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.914863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.919064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.041 [2024-07-15 07:21:56.919116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.919131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.923531] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.041 [2024-07-15 07:21:56.923571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.923585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.927880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.041 [2024-07-15 07:21:56.927920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.927934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.932222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.041 [2024-07-15 07:21:56.932261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.932275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.936453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.041 [2024-07-15 07:21:56.936492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.936506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.940801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.041 [2024-07-15 07:21:56.940840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.940855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.945265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.041 [2024-07-15 07:21:56.945303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.945318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.949650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.041 [2024-07-15 07:21:56.949690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.949704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.954091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.041 [2024-07-15 07:21:56.954130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.954144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.958404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.041 [2024-07-15 07:21:56.958443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.958457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.962754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.041 [2024-07-15 07:21:56.962794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.962809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.967013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.041 [2024-07-15 07:21:56.967053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.967067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.971382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.041 [2024-07-15 07:21:56.971422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.971436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.975786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.041 [2024-07-15 07:21:56.975826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.975840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.980161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.041 [2024-07-15 07:21:56.980200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.980215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.984578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.041 [2024-07-15 07:21:56.984617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.984631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.041 [2024-07-15 07:21:56.989025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.041 [2024-07-15 07:21:56.989065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.041 [2024-07-15 07:21:56.989096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.301 [2024-07-15 07:21:56.993533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.301 [2024-07-15 07:21:56.993583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.301 [2024-07-15 07:21:56.993597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.301 [2024-07-15 07:21:56.997927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.301 [2024-07-15 07:21:56.997966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.301 [2024-07-15 07:21:56.997980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.301 [2024-07-15 07:21:57.002217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.301 [2024-07-15 07:21:57.002256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.301 [2024-07-15 07:21:57.002270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.301 [2024-07-15 07:21:57.006567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.301 [2024-07-15 07:21:57.006607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.301 [2024-07-15 07:21:57.006622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.301 [2024-07-15 07:21:57.011128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.301 [2024-07-15 07:21:57.011170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.301 [2024-07-15 07:21:57.011185] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.301 [2024-07-15 07:21:57.015686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.301 [2024-07-15 07:21:57.015729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.301 [2024-07-15 07:21:57.015744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.301 [2024-07-15 07:21:57.020197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.301 [2024-07-15 07:21:57.020239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.301 [2024-07-15 07:21:57.020254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.301 [2024-07-15 07:21:57.024653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.301 [2024-07-15 07:21:57.024696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.301 [2024-07-15 07:21:57.024711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.301 [2024-07-15 07:21:57.029156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.301 [2024-07-15 07:21:57.029197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.301 [2024-07-15 07:21:57.029211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.301 [2024-07-15 07:21:57.033484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.301 [2024-07-15 07:21:57.033524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.301 [2024-07-15 07:21:57.033552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.301 [2024-07-15 07:21:57.037750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.301 [2024-07-15 07:21:57.037791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.301 [2024-07-15 07:21:57.037805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.301 [2024-07-15 07:21:57.042178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.301 [2024-07-15 07:21:57.042221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:48.301 [2024-07-15 07:21:57.042235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.301 [2024-07-15 07:21:57.046710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.301 [2024-07-15 07:21:57.046753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.301 [2024-07-15 07:21:57.046769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.301 [2024-07-15 07:21:57.050967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.301 [2024-07-15 07:21:57.051008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.301 [2024-07-15 07:21:57.051022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.301 [2024-07-15 07:21:57.055390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.301 [2024-07-15 07:21:57.055430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.301 [2024-07-15 07:21:57.055445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.301 [2024-07-15 07:21:57.059783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.301 [2024-07-15 07:21:57.059823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.301 [2024-07-15 07:21:57.059837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.301 [2024-07-15 07:21:57.064256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.301 [2024-07-15 07:21:57.064300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.301 [2024-07-15 07:21:57.064316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.301 [2024-07-15 07:21:57.068536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.301 [2024-07-15 07:21:57.068576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.301 [2024-07-15 07:21:57.068591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.073039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.073096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.073113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.077527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.077578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.077593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.081931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.081972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.081987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.086531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.086575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.086590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.090810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.090873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.090895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.095372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.095414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.095429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.099824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.099865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.099880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.104385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.104425] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.104439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.108784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.108825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.108840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.113117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.113156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.113170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.117443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.117482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.117496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.121805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.121845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.121859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.126308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.126350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.126365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.130790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.130830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.130844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.135266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 
00:17:48.302 [2024-07-15 07:21:57.135305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.135319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.139626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.139665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.139680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.143959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.143999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.144014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.148390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.148429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.148444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.152746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.152786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.152800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.157149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.157188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.157202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.161467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.161524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.161549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.165821] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.165861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.165876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.170166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.170204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.170218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.174460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.174500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.174514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.178778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.178817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.178831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.183157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.183195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.183209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.187684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.187724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.187739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.302 [2024-07-15 07:21:57.192038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.302 [2024-07-15 07:21:57.192090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.302 [2024-07-15 07:21:57.192106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:17:48.303 [2024-07-15 07:21:57.196481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.303 [2024-07-15 07:21:57.196521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.303 [2024-07-15 07:21:57.196535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.303 [2024-07-15 07:21:57.200790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.303 [2024-07-15 07:21:57.200832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.303 [2024-07-15 07:21:57.200848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.303 [2024-07-15 07:21:57.205186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.303 [2024-07-15 07:21:57.205226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.303 [2024-07-15 07:21:57.205240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.303 [2024-07-15 07:21:57.209470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.303 [2024-07-15 07:21:57.209510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.303 [2024-07-15 07:21:57.209524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.303 [2024-07-15 07:21:57.213854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.303 [2024-07-15 07:21:57.213894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.303 [2024-07-15 07:21:57.213908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.303 [2024-07-15 07:21:57.218270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.303 [2024-07-15 07:21:57.218309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.303 [2024-07-15 07:21:57.218323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.303 [2024-07-15 07:21:57.222680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.303 [2024-07-15 07:21:57.222721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.303 [2024-07-15 07:21:57.222735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.303 [2024-07-15 07:21:57.227045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.303 [2024-07-15 07:21:57.227099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.303 [2024-07-15 07:21:57.227115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.303 [2024-07-15 07:21:57.231496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.303 [2024-07-15 07:21:57.231535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.303 [2024-07-15 07:21:57.231550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.303 [2024-07-15 07:21:57.235981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.303 [2024-07-15 07:21:57.236021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.303 [2024-07-15 07:21:57.236035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.303 [2024-07-15 07:21:57.240333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.303 [2024-07-15 07:21:57.240372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.303 [2024-07-15 07:21:57.240386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.303 [2024-07-15 07:21:57.244668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.303 [2024-07-15 07:21:57.244707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.303 [2024-07-15 07:21:57.244722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.303 [2024-07-15 07:21:57.249048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.303 [2024-07-15 07:21:57.249101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.303 [2024-07-15 07:21:57.249116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.562 [2024-07-15 07:21:57.253572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.562 [2024-07-15 07:21:57.253610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.562 [2024-07-15 07:21:57.253625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.562 [2024-07-15 07:21:57.258186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.562 [2024-07-15 07:21:57.258225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.562 [2024-07-15 07:21:57.258239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.562 [2024-07-15 07:21:57.262591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.562 [2024-07-15 07:21:57.262631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.562 [2024-07-15 07:21:57.262646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.562 [2024-07-15 07:21:57.267005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.562 [2024-07-15 07:21:57.267045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.562 [2024-07-15 07:21:57.267060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.562 [2024-07-15 07:21:57.271378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.562 [2024-07-15 07:21:57.271416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.562 [2024-07-15 07:21:57.271431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.562 [2024-07-15 07:21:57.275858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.562 [2024-07-15 07:21:57.275898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.562 [2024-07-15 07:21:57.275912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.562 [2024-07-15 07:21:57.280323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.562 [2024-07-15 07:21:57.280366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.562 [2024-07-15 07:21:57.280381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.562 [2024-07-15 07:21:57.284750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.284792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:48.563 [2024-07-15 07:21:57.284806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.289197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.289236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.289250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.293502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.293565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.293581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.297925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.297966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.297981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.302433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.302473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.302488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.306747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.306787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.306801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.311223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.311263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.311277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.315494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.315533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.315547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.319876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.319916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.319930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.324238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.324276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.324290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.328586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.328625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.328655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.333147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.333185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.333200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.337555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.337595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.337610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.342028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.342068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.342101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.346383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.346422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.346436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.350759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.350799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.350814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.355068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.355118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.355133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.359451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.359490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.359504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.363747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.363785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.363800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.368022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.368062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.368094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.372628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.372671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.372717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.377111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 
00:17:48.563 [2024-07-15 07:21:57.377150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.377165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.381615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.381664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.381678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.386025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.386065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.386101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.391323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.391369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.391384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.395810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.395855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.395870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.400336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.400380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.400395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.404780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.563 [2024-07-15 07:21:57.404821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.563 [2024-07-15 07:21:57.404836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.563 [2024-07-15 07:21:57.409117] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.564 [2024-07-15 07:21:57.409156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.564 [2024-07-15 07:21:57.409171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.564 [2024-07-15 07:21:57.413501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.564 [2024-07-15 07:21:57.413549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.564 [2024-07-15 07:21:57.413564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.564 [2024-07-15 07:21:57.418048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.564 [2024-07-15 07:21:57.418108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.564 [2024-07-15 07:21:57.418125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.564 [2024-07-15 07:21:57.422537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.564 [2024-07-15 07:21:57.422576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.564 [2024-07-15 07:21:57.422591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.564 [2024-07-15 07:21:57.427035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.564 [2024-07-15 07:21:57.427093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.564 [2024-07-15 07:21:57.427109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.564 [2024-07-15 07:21:57.431365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.564 [2024-07-15 07:21:57.431426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.564 [2024-07-15 07:21:57.431440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.564 [2024-07-15 07:21:57.435934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.564 [2024-07-15 07:21:57.435995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.564 [2024-07-15 07:21:57.436011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:17:48.564 [2024-07-15 07:21:57.440435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.564 [2024-07-15 07:21:57.440483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.564 [2024-07-15 07:21:57.440515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.564 [2024-07-15 07:21:57.444900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.564 [2024-07-15 07:21:57.444947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.564 [2024-07-15 07:21:57.444962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.564 [2024-07-15 07:21:57.449374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.564 [2024-07-15 07:21:57.449442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.564 [2024-07-15 07:21:57.449458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.564 [2024-07-15 07:21:57.453911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.564 [2024-07-15 07:21:57.453953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.564 [2024-07-15 07:21:57.453969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.564 [2024-07-15 07:21:57.458689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.564 [2024-07-15 07:21:57.458737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.564 [2024-07-15 07:21:57.458753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.564 [2024-07-15 07:21:57.463283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.564 [2024-07-15 07:21:57.463327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.564 [2024-07-15 07:21:57.463349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.564 [2024-07-15 07:21:57.467874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.564 [2024-07-15 07:21:57.467920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.564 [2024-07-15 07:21:57.467936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.564 [2024-07-15 07:21:57.472465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.564 [2024-07-15 07:21:57.472508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.564 [2024-07-15 07:21:57.472523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.564 [2024-07-15 07:21:57.477244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.564 [2024-07-15 07:21:57.477289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.564 [2024-07-15 07:21:57.477304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.564 [2024-07-15 07:21:57.481757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.564 [2024-07-15 07:21:57.481801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.564 [2024-07-15 07:21:57.481817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.564 [2024-07-15 07:21:57.486327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.564 [2024-07-15 07:21:57.486371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.564 [2024-07-15 07:21:57.486386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.564 [2024-07-15 07:21:57.490877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.564 [2024-07-15 07:21:57.490921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.564 [2024-07-15 07:21:57.490937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.564 [2024-07-15 07:21:57.495330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.564 [2024-07-15 07:21:57.495373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.564 [2024-07-15 07:21:57.495388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.564 [2024-07-15 07:21:57.499775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.564 [2024-07-15 07:21:57.499832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.564 [2024-07-15 07:21:57.499847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.564 [2024-07-15 07:21:57.504342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.564 [2024-07-15 07:21:57.504402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.564 [2024-07-15 07:21:57.504418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.564 [2024-07-15 07:21:57.508876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.564 [2024-07-15 07:21:57.508936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.564 [2024-07-15 07:21:57.508952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.564 [2024-07-15 07:21:57.513351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.564 [2024-07-15 07:21:57.513394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.564 [2024-07-15 07:21:57.513409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.823 [2024-07-15 07:21:57.517898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.823 [2024-07-15 07:21:57.517939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.823 [2024-07-15 07:21:57.517970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.823 [2024-07-15 07:21:57.522422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.823 [2024-07-15 07:21:57.522464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.823 [2024-07-15 07:21:57.522479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.823 [2024-07-15 07:21:57.526817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.823 [2024-07-15 07:21:57.526857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.823 [2024-07-15 07:21:57.526888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.823 [2024-07-15 07:21:57.531155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.531195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:48.824 [2024-07-15 07:21:57.531209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.535571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.535613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.535628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.539992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.540052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.540067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.544409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.544472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.544488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.548862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.548926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.548942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.553254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.553296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.553311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.557623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.557663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.557677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.562088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.562125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.562140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.566536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.566577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.566591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.570957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.570998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.571012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.575200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.575239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.575254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.579585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.579627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.579642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.584012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.584053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.584068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.588437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.588478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.588492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.592903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.592945] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.592959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.598426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.598490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.598531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.603789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.603855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.603871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.608383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.608448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.608464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.613017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.613099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.613116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.617555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.617617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.617633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.622035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.622089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.622105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.626447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 
00:17:48.824 [2024-07-15 07:21:57.626488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.626503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.630922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.630965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.630980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.635380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.635421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.635435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.639868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.639909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.639925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.644218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.644278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.644294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.648695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.648754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.648770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.653319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.653381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.824 [2024-07-15 07:21:57.653396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.824 [2024-07-15 07:21:57.657876] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.824 [2024-07-15 07:21:57.657931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.657946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.662296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.662337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.662352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.666713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.666754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.666768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.671029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.671084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.671100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.675357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.675398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.675413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.679799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.679840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.679854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.684153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.684193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.684207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.688426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.688466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.688480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.692729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.692769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.692784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.697114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.697154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.697169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.701425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.701465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.701479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.705682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.705727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.705740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.709902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.709943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.709957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.714153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.714190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.714220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.718615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.718656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.718670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.723197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.723237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.723252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.727542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.727582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.727596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.731973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.732013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.732027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.736344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.736384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.736399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.740780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.740821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.740835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.745208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.745248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.745263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.749781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.749822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.749837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.754203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.754242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.754257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.758460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.758497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.758528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.762880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.762920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.762935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.767366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.767410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.767425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.825 [2024-07-15 07:21:57.771851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:48.825 [2024-07-15 07:21:57.771892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.825 [2024-07-15 07:21:57.771907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.085 [2024-07-15 07:21:57.776213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.085 [2024-07-15 07:21:57.776253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:49.085 [2024-07-15 07:21:57.776268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.085 [2024-07-15 07:21:57.780552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.085 [2024-07-15 07:21:57.780593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.085 [2024-07-15 07:21:57.780607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.085 [2024-07-15 07:21:57.784955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.085 [2024-07-15 07:21:57.784995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.085 [2024-07-15 07:21:57.785010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.085 [2024-07-15 07:21:57.789306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.085 [2024-07-15 07:21:57.789346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.085 [2024-07-15 07:21:57.789360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.085 [2024-07-15 07:21:57.793578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.085 [2024-07-15 07:21:57.793618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.085 [2024-07-15 07:21:57.793633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.085 [2024-07-15 07:21:57.798043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.085 [2024-07-15 07:21:57.798094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.085 [2024-07-15 07:21:57.798110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.085 [2024-07-15 07:21:57.802543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.085 [2024-07-15 07:21:57.802584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.085 [2024-07-15 07:21:57.802598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.085 [2024-07-15 07:21:57.806874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.085 [2024-07-15 07:21:57.806915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.086 [2024-07-15 07:21:57.806929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.086 [2024-07-15 07:21:57.811326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.086 [2024-07-15 07:21:57.811364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.086 [2024-07-15 07:21:57.811394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.086 [2024-07-15 07:21:57.815801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.086 [2024-07-15 07:21:57.815840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.086 [2024-07-15 07:21:57.815872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.086 [2024-07-15 07:21:57.820142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.086 [2024-07-15 07:21:57.820180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.086 [2024-07-15 07:21:57.820195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.086 [2024-07-15 07:21:57.824510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.086 [2024-07-15 07:21:57.824547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.086 [2024-07-15 07:21:57.824578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.086 [2024-07-15 07:21:57.828984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.086 [2024-07-15 07:21:57.829025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.086 [2024-07-15 07:21:57.829039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.086 [2024-07-15 07:21:57.833461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.086 [2024-07-15 07:21:57.833499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.086 [2024-07-15 07:21:57.833531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.086 [2024-07-15 07:21:57.837947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.086 [2024-07-15 07:21:57.837987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.086 [2024-07-15 07:21:57.838002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.086 [2024-07-15 07:21:57.842395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.086 [2024-07-15 07:21:57.842435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.086 [2024-07-15 07:21:57.842449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.086 [2024-07-15 07:21:57.846858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.086 [2024-07-15 07:21:57.846902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.086 [2024-07-15 07:21:57.846918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.086 [2024-07-15 07:21:57.851337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.086 [2024-07-15 07:21:57.851399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.086 [2024-07-15 07:21:57.851415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.086 [2024-07-15 07:21:57.855708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.086 [2024-07-15 07:21:57.855754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.086 [2024-07-15 07:21:57.855769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.086 [2024-07-15 07:21:57.860170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.086 [2024-07-15 07:21:57.860212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.086 [2024-07-15 07:21:57.860228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.086 [2024-07-15 07:21:57.864573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.086 [2024-07-15 07:21:57.864613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.086 [2024-07-15 07:21:57.864628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.086 [2024-07-15 07:21:57.869008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 
00:17:49.086 [2024-07-15 07:21:57.869049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.086 [2024-07-15 07:21:57.869064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.086 [2024-07-15 07:21:57.873321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.086 [2024-07-15 07:21:57.873361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.086 [2024-07-15 07:21:57.873375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.086 [2024-07-15 07:21:57.877784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.086 [2024-07-15 07:21:57.877824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.086 [2024-07-15 07:21:57.877839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.086 [2024-07-15 07:21:57.882064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.086 [2024-07-15 07:21:57.882114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.086 [2024-07-15 07:21:57.882129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.086 [2024-07-15 07:21:57.886372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.086 [2024-07-15 07:21:57.886411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.086 [2024-07-15 07:21:57.886425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.086 [2024-07-15 07:21:57.890840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.086 [2024-07-15 07:21:57.890881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.086 [2024-07-15 07:21:57.890896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.086 [2024-07-15 07:21:57.895185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.086 [2024-07-15 07:21:57.895224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.086 [2024-07-15 07:21:57.895238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.086 [2024-07-15 07:21:57.899626] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.086 [2024-07-15 07:21:57.899677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.086 [2024-07-15 07:21:57.899709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.086 [2024-07-15 07:21:57.904162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.086 [2024-07-15 07:21:57.904217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.086 [2024-07-15 07:21:57.904232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.086 [2024-07-15 07:21:57.908602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.086 [2024-07-15 07:21:57.908655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.086 [2024-07-15 07:21:57.908687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.086 [2024-07-15 07:21:57.913210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.087 [2024-07-15 07:21:57.913268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.087 [2024-07-15 07:21:57.913283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.087 [2024-07-15 07:21:57.917760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.087 [2024-07-15 07:21:57.917803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.087 [2024-07-15 07:21:57.917818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.087 [2024-07-15 07:21:57.922304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.087 [2024-07-15 07:21:57.922343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.087 [2024-07-15 07:21:57.922374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.087 [2024-07-15 07:21:57.926858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.087 [2024-07-15 07:21:57.926899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.087 [2024-07-15 07:21:57.926914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:17:49.087 [2024-07-15 07:21:57.931309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.087 [2024-07-15 07:21:57.931346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.087 [2024-07-15 07:21:57.931377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.087 [2024-07-15 07:21:57.935803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.087 [2024-07-15 07:21:57.935844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.087 [2024-07-15 07:21:57.935858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.087 [2024-07-15 07:21:57.940351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.087 [2024-07-15 07:21:57.940391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.087 [2024-07-15 07:21:57.940406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.087 [2024-07-15 07:21:57.944889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.087 [2024-07-15 07:21:57.944930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.087 [2024-07-15 07:21:57.944944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.087 [2024-07-15 07:21:57.949430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.087 [2024-07-15 07:21:57.949485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.087 [2024-07-15 07:21:57.949517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.087 [2024-07-15 07:21:57.953920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.087 [2024-07-15 07:21:57.953960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.087 [2024-07-15 07:21:57.953974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.087 [2024-07-15 07:21:57.958534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.087 [2024-07-15 07:21:57.958572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.087 [2024-07-15 07:21:57.958603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.087 [2024-07-15 07:21:57.962990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.087 [2024-07-15 07:21:57.963046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.087 [2024-07-15 07:21:57.963077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.087 [2024-07-15 07:21:57.967602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b46ac0) 00:17:49.087 [2024-07-15 07:21:57.967642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.087 [2024-07-15 07:21:57.967672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.087 00:17:49.087 Latency(us) 00:17:49.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.087 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:49.087 nvme0n1 : 2.00 6958.86 869.86 0.00 0.00 2295.69 2025.66 9770.82 00:17:49.087 =================================================================================================================== 00:17:49.087 Total : 6958.86 869.86 0.00 0.00 2295.69 2025.66 9770.82 00:17:49.087 0 00:17:49.087 07:21:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:49.087 07:21:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:49.087 07:21:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:49.087 | .driver_specific 00:17:49.087 | .nvme_error 00:17:49.087 | .status_code 00:17:49.087 | .command_transient_transport_error' 00:17:49.087 07:21:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:49.346 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 449 > 0 )) 00:17:49.346 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80415 00:17:49.346 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80415 ']' 00:17:49.346 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80415 00:17:49.346 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:17:49.346 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:49.346 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80415 00:17:49.346 killing process with pid 80415 00:17:49.346 Received shutdown signal, test time was about 2.000000 seconds 00:17:49.346 00:17:49.346 Latency(us) 00:17:49.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.346 =================================================================================================================== 00:17:49.346 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:49.346 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:49.346 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:49.346 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80415' 00:17:49.346 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80415 00:17:49.346 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80415 00:17:49.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:49.604 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:17:49.604 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:49.604 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:49.604 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:49.604 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:49.604 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80470 00:17:49.604 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80470 /var/tmp/bperf.sock 00:17:49.604 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80470 ']' 00:17:49.604 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:17:49.604 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:49.604 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:49.604 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:49.604 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:49.605 07:21:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:49.605 [2024-07-15 07:21:58.480043] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
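Just before the second bdevperf instance is launched above, host/digest.sh validates the randread pass that produced the digest-error records earlier in this log: it fetches bdev I/O statistics over the bperf RPC socket and extracts the per-controller count of commands that completed with a transient transport error, then asserts the count is non-zero (449 in this run). The standalone sketch below reuses the exact rpc.py invocation and jq filter shown in the trace; it assumes a bdevperf instance is still listening on /var/tmp/bperf.sock and that bdev_nvme_set_options --nvme-error-stat was applied, as it is in this job.

    # Sketch of the transient-error-count check traced above (host/digest.sh).
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
                   bdev_get_iostat -b nvme0n1 \
               | jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error')
    (( errcount > 0 )) && echo "transient transport errors recorded: $errcount"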
00:17:49.605 [2024-07-15 07:21:58.480384] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80470 ] 00:17:49.885 [2024-07-15 07:21:58.614617] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.885 [2024-07-15 07:21:58.673736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.885 [2024-07-15 07:21:58.702953] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:50.826 07:21:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:50.826 07:21:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:50.826 07:21:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:50.826 07:21:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:50.826 07:21:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:50.826 07:21:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.826 07:21:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:50.826 07:21:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.826 07:21:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:50.826 07:21:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:51.086 nvme0n1 00:17:51.086 07:21:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:51.086 07:21:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.086 07:21:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:51.086 07:21:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.086 07:21:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:51.086 07:21:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:51.344 Running I/O for 2 seconds... 
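Pulling the randwrite setup out of the interleaved trace above: once the new bdevperf instance is listening on /var/tmp/bperf.sock, the test enables per-controller NVMe error statistics with unlimited bdev retries, clears any previous accel error injection, attaches the target with the TCP data digest enabled (--ddgst), arms crc32c corruption for 256 operations, and then drives the workload through bdevperf.py. A condensed sketch of that sequence follows, using only the paths, flags, and addresses visible in the trace; the socket behind rpc_cmd is not shown in this excerpt, so /var/tmp/spdk.sock is assumed here as the conventional default for the main SPDK application.

    SPDK=/home/vagrant/spdk_repo/spdk      # repo path as seen in the trace
    BPERF_SOCK=/var/tmp/bperf.sock         # bdevperf RPC socket from the trace
    APP_SOCK=/var/tmp/spdk.sock            # assumption: the socket used by rpc_cmd is not shown here

    # Start the benchmark process in the background (flags copied from the trace).
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &

    # RPC sequence as traced: bdev_nvme_* calls go to the bperf socket,
    # while accel_error_inject_error is issued via rpc_cmd (socket assumed above).
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    "$SPDK/scripts/rpc.py" -s "$APP_SOCK" accel_error_inject_error -o crc32c -t disable
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$SPDK/scripts/rpc.py" -s "$APP_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 256
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

In the trace the corrupt injection is armed only after the controller attach completes, presumably so the injected crc32c failures land on the benchmark I/O rather than on the connect sequence.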
00:17:51.344 [2024-07-15 07:22:00.131302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190fef90 00:17:51.344 [2024-07-15 07:22:00.134227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.344 [2024-07-15 07:22:00.134276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:51.344 [2024-07-15 07:22:00.148055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190feb58 00:17:51.344 [2024-07-15 07:22:00.150672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.344 [2024-07-15 07:22:00.150714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:51.344 [2024-07-15 07:22:00.164771] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190fe2e8 00:17:51.344 [2024-07-15 07:22:00.167360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.344 [2024-07-15 07:22:00.167400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:51.344 [2024-07-15 07:22:00.181418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190fda78 00:17:51.344 [2024-07-15 07:22:00.184021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.344 [2024-07-15 07:22:00.184061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:51.344 [2024-07-15 07:22:00.197863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190fd208 00:17:51.345 [2024-07-15 07:22:00.200383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.345 [2024-07-15 07:22:00.200421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:51.345 [2024-07-15 07:22:00.214458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190fc998 00:17:51.345 [2024-07-15 07:22:00.216950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.345 [2024-07-15 07:22:00.216988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:51.345 [2024-07-15 07:22:00.231038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190fc128 00:17:51.345 [2024-07-15 07:22:00.233532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.345 [2024-07-15 07:22:00.233577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 
dnr:0 00:17:51.345 [2024-07-15 07:22:00.247344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190fb8b8 00:17:51.345 [2024-07-15 07:22:00.249803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.345 [2024-07-15 07:22:00.249841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:51.345 [2024-07-15 07:22:00.263916] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190fb048 00:17:51.345 [2024-07-15 07:22:00.266378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.345 [2024-07-15 07:22:00.266415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:51.345 [2024-07-15 07:22:00.280432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190fa7d8 00:17:51.345 [2024-07-15 07:22:00.282857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.345 [2024-07-15 07:22:00.282894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:51.345 [2024-07-15 07:22:00.296992] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f9f68 00:17:51.603 [2024-07-15 07:22:00.299577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.603 [2024-07-15 07:22:00.299623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:51.603 [2024-07-15 07:22:00.313856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f96f8 00:17:51.603 [2024-07-15 07:22:00.316243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.603 [2024-07-15 07:22:00.316286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:51.603 [2024-07-15 07:22:00.330272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f8e88 00:17:51.603 [2024-07-15 07:22:00.332619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.603 [2024-07-15 07:22:00.332660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:51.603 [2024-07-15 07:22:00.346683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f8618 00:17:51.603 [2024-07-15 07:22:00.349022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.603 [2024-07-15 07:22:00.349063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 
cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:51.603 [2024-07-15 07:22:00.363156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f7da8 00:17:51.603 [2024-07-15 07:22:00.365480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.603 [2024-07-15 07:22:00.365523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:51.603 [2024-07-15 07:22:00.380026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f7538 00:17:51.603 [2024-07-15 07:22:00.382368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.603 [2024-07-15 07:22:00.382411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:51.603 [2024-07-15 07:22:00.396720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f6cc8 00:17:51.603 [2024-07-15 07:22:00.399038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.603 [2024-07-15 07:22:00.399092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:51.603 [2024-07-15 07:22:00.413099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f6458 00:17:51.603 [2024-07-15 07:22:00.415362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.603 [2024-07-15 07:22:00.415400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:51.603 [2024-07-15 07:22:00.429517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f5be8 00:17:51.603 [2024-07-15 07:22:00.431833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.603 [2024-07-15 07:22:00.431873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:51.603 [2024-07-15 07:22:00.446519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f5378 00:17:51.603 [2024-07-15 07:22:00.448732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.603 [2024-07-15 07:22:00.448776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:51.603 [2024-07-15 07:22:00.462944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f4b08 00:17:51.603 [2024-07-15 07:22:00.465151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.603 [2024-07-15 07:22:00.465191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:51.603 [2024-07-15 07:22:00.479496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f4298 00:17:51.603 [2024-07-15 07:22:00.481694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.603 [2024-07-15 07:22:00.481748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:51.603 [2024-07-15 07:22:00.495956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f3a28 00:17:51.603 [2024-07-15 07:22:00.498151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.603 [2024-07-15 07:22:00.498190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:51.603 [2024-07-15 07:22:00.512390] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f31b8 00:17:51.603 [2024-07-15 07:22:00.514540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.603 [2024-07-15 07:22:00.514579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:51.603 [2024-07-15 07:22:00.528810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f2948 00:17:51.603 [2024-07-15 07:22:00.531169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.603 [2024-07-15 07:22:00.531209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:51.603 [2024-07-15 07:22:00.545685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f20d8 00:17:51.603 [2024-07-15 07:22:00.547804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.603 [2024-07-15 07:22:00.547846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:51.861 [2024-07-15 07:22:00.562644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f1868 00:17:51.861 [2024-07-15 07:22:00.564738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.861 [2024-07-15 07:22:00.564781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:51.861 [2024-07-15 07:22:00.578975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f0ff8 00:17:51.861 [2024-07-15 07:22:00.581039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.861 [2024-07-15 07:22:00.581087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:51.861 [2024-07-15 07:22:00.595449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f0788 00:17:51.861 [2024-07-15 07:22:00.597522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.861 [2024-07-15 07:22:00.597572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:51.861 [2024-07-15 07:22:00.611843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190eff18 00:17:51.861 [2024-07-15 07:22:00.613859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.861 [2024-07-15 07:22:00.613897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:51.861 [2024-07-15 07:22:00.628130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190ef6a8 00:17:51.861 [2024-07-15 07:22:00.630130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.861 [2024-07-15 07:22:00.630166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:51.861 [2024-07-15 07:22:00.644404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190eee38 00:17:51.861 [2024-07-15 07:22:00.646372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.861 [2024-07-15 07:22:00.646409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:51.861 [2024-07-15 07:22:00.660652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190ee5c8 00:17:51.861 [2024-07-15 07:22:00.662599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.861 [2024-07-15 07:22:00.662635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:51.861 [2024-07-15 07:22:00.676918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190edd58 00:17:51.861 [2024-07-15 07:22:00.678863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.861 [2024-07-15 07:22:00.678900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:51.861 [2024-07-15 07:22:00.693212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190ed4e8 00:17:51.861 [2024-07-15 07:22:00.695122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.861 [2024-07-15 07:22:00.695160] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:51.861 [2024-07-15 07:22:00.709718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190ecc78 00:17:51.861 [2024-07-15 07:22:00.711614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.861 [2024-07-15 07:22:00.711665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:51.861 [2024-07-15 07:22:00.726579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190ec408 00:17:51.861 [2024-07-15 07:22:00.728633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.861 [2024-07-15 07:22:00.728672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:51.861 [2024-07-15 07:22:00.743524] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190ebb98 00:17:51.861 [2024-07-15 07:22:00.745393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.861 [2024-07-15 07:22:00.745431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:51.861 [2024-07-15 07:22:00.760379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190eb328 00:17:51.861 [2024-07-15 07:22:00.762253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.861 [2024-07-15 07:22:00.762296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:51.861 [2024-07-15 07:22:00.777260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190eaab8 00:17:51.861 [2024-07-15 07:22:00.779094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.861 [2024-07-15 07:22:00.779134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:51.861 [2024-07-15 07:22:00.793676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190ea248 00:17:51.861 [2024-07-15 07:22:00.795471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.861 [2024-07-15 07:22:00.795509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:51.861 [2024-07-15 07:22:00.810121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e99d8 00:17:51.861 [2024-07-15 07:22:00.811997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.861 [2024-07-15 
07:22:00.812038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:52.119 [2024-07-15 07:22:00.826738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e9168 00:17:52.119 [2024-07-15 07:22:00.828494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.119 [2024-07-15 07:22:00.828530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:52.119 [2024-07-15 07:22:00.843083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e88f8 00:17:52.119 [2024-07-15 07:22:00.844813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.119 [2024-07-15 07:22:00.844852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:52.119 [2024-07-15 07:22:00.859480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e8088 00:17:52.119 [2024-07-15 07:22:00.861214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.119 [2024-07-15 07:22:00.861251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:52.119 [2024-07-15 07:22:00.875969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e7818 00:17:52.119 [2024-07-15 07:22:00.877677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.119 [2024-07-15 07:22:00.877713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:52.119 [2024-07-15 07:22:00.892367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e6fa8 00:17:52.119 [2024-07-15 07:22:00.894021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.119 [2024-07-15 07:22:00.894058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:52.119 [2024-07-15 07:22:00.908914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e6738 00:17:52.119 [2024-07-15 07:22:00.910568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.119 [2024-07-15 07:22:00.910607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:52.119 [2024-07-15 07:22:00.925333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e5ec8 00:17:52.119 [2024-07-15 07:22:00.926956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:52.119 [2024-07-15 07:22:00.926993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:52.119 [2024-07-15 07:22:00.941722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e5658 00:17:52.119 [2024-07-15 07:22:00.943332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.119 [2024-07-15 07:22:00.943368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:52.119 [2024-07-15 07:22:00.958222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e4de8 00:17:52.119 [2024-07-15 07:22:00.959823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.119 [2024-07-15 07:22:00.959859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:52.119 [2024-07-15 07:22:00.974596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e4578 00:17:52.119 [2024-07-15 07:22:00.976217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.119 [2024-07-15 07:22:00.976261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:52.119 [2024-07-15 07:22:00.991060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e3d08 00:17:52.119 [2024-07-15 07:22:00.992602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.119 [2024-07-15 07:22:00.992641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:52.119 [2024-07-15 07:22:01.007359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e3498 00:17:52.119 [2024-07-15 07:22:01.008868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.119 [2024-07-15 07:22:01.008906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:52.120 [2024-07-15 07:22:01.023620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e2c28 00:17:52.120 [2024-07-15 07:22:01.025120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.120 [2024-07-15 07:22:01.025157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:52.120 [2024-07-15 07:22:01.039909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e23b8 00:17:52.120 [2024-07-15 07:22:01.041412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19117 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.120 [2024-07-15 07:22:01.041449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:52.120 [2024-07-15 07:22:01.056347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e1b48 00:17:52.120 [2024-07-15 07:22:01.057847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.120 [2024-07-15 07:22:01.057890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:52.378 [2024-07-15 07:22:01.072925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e12d8 00:17:52.378 [2024-07-15 07:22:01.074446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.378 [2024-07-15 07:22:01.074484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:52.378 [2024-07-15 07:22:01.089285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e0a68 00:17:52.378 [2024-07-15 07:22:01.090705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.378 [2024-07-15 07:22:01.090743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:52.378 [2024-07-15 07:22:01.105554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e01f8 00:17:52.378 [2024-07-15 07:22:01.106948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.378 [2024-07-15 07:22:01.106986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:52.378 [2024-07-15 07:22:01.121906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190df988 00:17:52.378 [2024-07-15 07:22:01.123309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.378 [2024-07-15 07:22:01.123359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:52.378 [2024-07-15 07:22:01.138533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190df118 00:17:52.378 [2024-07-15 07:22:01.139921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.378 [2024-07-15 07:22:01.139961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:52.378 [2024-07-15 07:22:01.155281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190de8a8 00:17:52.378 [2024-07-15 07:22:01.156665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:11749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.378 [2024-07-15 07:22:01.156708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:52.378 [2024-07-15 07:22:01.171653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190de038 00:17:52.378 [2024-07-15 07:22:01.172978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.378 [2024-07-15 07:22:01.173015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:52.378 [2024-07-15 07:22:01.196003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190de038 00:17:52.378 [2024-07-15 07:22:01.198638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.378 [2024-07-15 07:22:01.198690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.378 [2024-07-15 07:22:01.212726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190de8a8 00:17:52.378 [2024-07-15 07:22:01.215503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.378 [2024-07-15 07:22:01.215555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:52.378 [2024-07-15 07:22:01.229740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190df118 00:17:52.378 [2024-07-15 07:22:01.232301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.378 [2024-07-15 07:22:01.232344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:52.378 [2024-07-15 07:22:01.246068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190df988 00:17:52.378 [2024-07-15 07:22:01.248585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.378 [2024-07-15 07:22:01.248623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:52.378 [2024-07-15 07:22:01.262370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e01f8 00:17:52.378 [2024-07-15 07:22:01.264862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.378 [2024-07-15 07:22:01.264900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:52.378 [2024-07-15 07:22:01.278707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e0a68 00:17:52.378 [2024-07-15 07:22:01.281192] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.378 [2024-07-15 07:22:01.281229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:52.378 [2024-07-15 07:22:01.294998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e12d8 00:17:52.378 [2024-07-15 07:22:01.297467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.378 [2024-07-15 07:22:01.297503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:52.378 [2024-07-15 07:22:01.311338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e1b48 00:17:52.378 [2024-07-15 07:22:01.313777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.378 [2024-07-15 07:22:01.313816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:52.378 [2024-07-15 07:22:01.327759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e23b8 00:17:52.378 [2024-07-15 07:22:01.330291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.378 [2024-07-15 07:22:01.330336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:52.636 [2024-07-15 07:22:01.344891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e2c28 00:17:52.636 [2024-07-15 07:22:01.347332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.636 [2024-07-15 07:22:01.347374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:52.636 [2024-07-15 07:22:01.361322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e3498 00:17:52.636 [2024-07-15 07:22:01.363699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.636 [2024-07-15 07:22:01.363738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:52.636 [2024-07-15 07:22:01.377626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e3d08 00:17:52.636 [2024-07-15 07:22:01.379996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.636 [2024-07-15 07:22:01.380033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:52.636 [2024-07-15 07:22:01.394004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e4578 00:17:52.636 [2024-07-15 
07:22:01.396367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.636 [2024-07-15 07:22:01.396403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:52.636 [2024-07-15 07:22:01.411327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e4de8 00:17:52.636 [2024-07-15 07:22:01.413688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.636 [2024-07-15 07:22:01.413727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:52.636 [2024-07-15 07:22:01.427811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e5658 00:17:52.636 [2024-07-15 07:22:01.430168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.636 [2024-07-15 07:22:01.430207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:52.636 [2024-07-15 07:22:01.444320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e5ec8 00:17:52.636 [2024-07-15 07:22:01.446608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.636 [2024-07-15 07:22:01.446648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:52.636 [2024-07-15 07:22:01.460716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e6738 00:17:52.636 [2024-07-15 07:22:01.462991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.636 [2024-07-15 07:22:01.463032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:52.636 [2024-07-15 07:22:01.477464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e6fa8 00:17:52.636 [2024-07-15 07:22:01.479759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.636 [2024-07-15 07:22:01.479801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:52.636 [2024-07-15 07:22:01.493906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e7818 00:17:52.637 [2024-07-15 07:22:01.496132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.637 [2024-07-15 07:22:01.496170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:52.637 [2024-07-15 07:22:01.510223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e8088 
00:17:52.637 [2024-07-15 07:22:01.512443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.637 [2024-07-15 07:22:01.512482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:52.637 [2024-07-15 07:22:01.526711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e88f8 00:17:52.637 [2024-07-15 07:22:01.529012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.637 [2024-07-15 07:22:01.529050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:52.637 [2024-07-15 07:22:01.543271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e9168 00:17:52.637 [2024-07-15 07:22:01.545534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.637 [2024-07-15 07:22:01.545578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:52.637 [2024-07-15 07:22:01.560776] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190e99d8 00:17:52.637 [2024-07-15 07:22:01.562972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.637 [2024-07-15 07:22:01.563011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:52.637 [2024-07-15 07:22:01.577293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190ea248 00:17:52.637 [2024-07-15 07:22:01.579430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.637 [2024-07-15 07:22:01.579469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:52.894 [2024-07-15 07:22:01.593880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190eaab8 00:17:52.894 [2024-07-15 07:22:01.595998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.895 [2024-07-15 07:22:01.596035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:52.895 [2024-07-15 07:22:01.610259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190eb328 00:17:52.895 [2024-07-15 07:22:01.612337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.895 [2024-07-15 07:22:01.612374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:52.895 [2024-07-15 07:22:01.626604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with 
pdu=0x2000190ebb98 00:17:52.895 [2024-07-15 07:22:01.628674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.895 [2024-07-15 07:22:01.628711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:52.895 [2024-07-15 07:22:01.642996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190ec408 00:17:52.895 [2024-07-15 07:22:01.645044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.895 [2024-07-15 07:22:01.645090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:52.895 [2024-07-15 07:22:01.659360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190ecc78 00:17:52.895 [2024-07-15 07:22:01.661384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.895 [2024-07-15 07:22:01.661422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:52.895 [2024-07-15 07:22:01.675669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190ed4e8 00:17:52.895 [2024-07-15 07:22:01.677670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.895 [2024-07-15 07:22:01.677706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:52.895 [2024-07-15 07:22:01.691993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190edd58 00:17:52.895 [2024-07-15 07:22:01.694019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.895 [2024-07-15 07:22:01.694055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:52.895 [2024-07-15 07:22:01.708324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190ee5c8 00:17:52.895 [2024-07-15 07:22:01.710284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.895 [2024-07-15 07:22:01.710321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:52.895 [2024-07-15 07:22:01.724656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190eee38 00:17:52.895 [2024-07-15 07:22:01.726597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.895 [2024-07-15 07:22:01.726633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:52.895 [2024-07-15 07:22:01.741206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16be360) with pdu=0x2000190ef6a8 00:17:52.895 [2024-07-15 07:22:01.743139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.895 [2024-07-15 07:22:01.743177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:52.895 [2024-07-15 07:22:01.757558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190eff18 00:17:52.895 [2024-07-15 07:22:01.759455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.895 [2024-07-15 07:22:01.759492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:52.895 [2024-07-15 07:22:01.773895] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f0788 00:17:52.895 [2024-07-15 07:22:01.775775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.895 [2024-07-15 07:22:01.775810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:52.895 [2024-07-15 07:22:01.790257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f0ff8 00:17:52.895 [2024-07-15 07:22:01.792110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.895 [2024-07-15 07:22:01.792145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:52.895 [2024-07-15 07:22:01.806542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f1868 00:17:52.895 [2024-07-15 07:22:01.808372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.895 [2024-07-15 07:22:01.808408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:52.895 [2024-07-15 07:22:01.822827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f20d8 00:17:52.895 [2024-07-15 07:22:01.824644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.895 [2024-07-15 07:22:01.824679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:52.895 [2024-07-15 07:22:01.839111] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f2948 00:17:52.895 [2024-07-15 07:22:01.840891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.895 [2024-07-15 07:22:01.840927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:53.155 [2024-07-15 07:22:01.855648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x16be360) with pdu=0x2000190f31b8 00:17:53.155 [2024-07-15 07:22:01.857433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.155 [2024-07-15 07:22:01.857468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:53.155 [2024-07-15 07:22:01.871941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f3a28 00:17:53.155 [2024-07-15 07:22:01.873714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.155 [2024-07-15 07:22:01.873749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:53.155 [2024-07-15 07:22:01.888234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f4298 00:17:53.155 [2024-07-15 07:22:01.889965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.155 [2024-07-15 07:22:01.890001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:53.155 [2024-07-15 07:22:01.904502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f4b08 00:17:53.155 [2024-07-15 07:22:01.906234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.155 [2024-07-15 07:22:01.906269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:53.155 [2024-07-15 07:22:01.920827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f5378 00:17:53.155 [2024-07-15 07:22:01.922540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.155 [2024-07-15 07:22:01.922576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:53.155 [2024-07-15 07:22:01.937120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f5be8 00:17:53.155 [2024-07-15 07:22:01.938792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.155 [2024-07-15 07:22:01.938829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:53.155 [2024-07-15 07:22:01.953459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f6458 00:17:53.155 [2024-07-15 07:22:01.955120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.155 [2024-07-15 07:22:01.955157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:53.155 [2024-07-15 07:22:01.969973] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f6cc8 00:17:53.155 [2024-07-15 07:22:01.971612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.155 [2024-07-15 07:22:01.971649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:53.155 [2024-07-15 07:22:01.986306] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f7538 00:17:53.155 [2024-07-15 07:22:01.987906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.155 [2024-07-15 07:22:01.987942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:53.155 [2024-07-15 07:22:02.002600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f7da8 00:17:53.155 [2024-07-15 07:22:02.004192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.155 [2024-07-15 07:22:02.004227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:53.155 [2024-07-15 07:22:02.018899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f8618 00:17:53.155 [2024-07-15 07:22:02.020478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.155 [2024-07-15 07:22:02.020513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:53.155 [2024-07-15 07:22:02.035182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f8e88 00:17:53.155 [2024-07-15 07:22:02.036794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.155 [2024-07-15 07:22:02.036829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:53.155 [2024-07-15 07:22:02.051682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f96f8 00:17:53.155 [2024-07-15 07:22:02.053216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.155 [2024-07-15 07:22:02.053252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:53.155 [2024-07-15 07:22:02.067946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190f9f68 00:17:53.155 [2024-07-15 07:22:02.069478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.155 [2024-07-15 07:22:02.069512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:53.155 
[2024-07-15 07:22:02.084252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190fa7d8
00:17:53.155 [2024-07-15 07:22:02.085749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:53.155 [2024-07-15 07:22:02.085785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:17:53.155 [2024-07-15 07:22:02.100519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be360) with pdu=0x2000190fb048
00:17:53.155 [2024-07-15 07:22:02.101998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:53.155 [2024-07-15 07:22:02.102034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:17:53.414
00:17:53.414 Latency(us)
00:17:53.414 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:53.414 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:53.414 nvme0n1 : 2.01 15320.28 59.84 0.00 0.00 8347.26 7268.54 31933.91
00:17:53.414 ===================================================================================================================
00:17:53.414 Total : 15320.28 59.84 0.00 0.00 8347.26 7268.54 31933.91
00:17:53.414 0
00:17:53.414 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:53.414 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:53.414 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:53.414 | .driver_specific
00:17:53.414 | .nvme_error
00:17:53.414 | .status_code
00:17:53.414 | .command_transient_transport_error'
00:17:53.414 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 120 > 0 ))
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80470
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80470 ']'
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80470
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80470
00:17:53.672 killing process with pid 80470
Received shutdown signal, test time was about 2.000000 seconds
00:17:53.672
00:17:53.672 Latency(us)
00:17:53.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:53.672 ===================================================================================================================
00:17:53.672 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80470'
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80470
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80470
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80529
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80529 /var/tmp/bperf.sock
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80529 ']'
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:17:53.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:17:53.672 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:53.672 [2024-07-15 07:22:02.619987] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization...
00:17:53.672 [2024-07-15 07:22:02.620325] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80529 ]
00:17:53.672 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:53.672 Zero copy mechanism will not be used.
00:17:53.930 [2024-07-15 07:22:02.756442] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:53.930 [2024-07-15 07:22:02.826279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:17:53.930 [2024-07-15 07:22:02.859806] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:17:54.188 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:54.188 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:17:54.188 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:54.188 07:22:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:54.445 07:22:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:17:54.445 07:22:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:54.445 07:22:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:54.445 07:22:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:54.445 07:22:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:54.445 07:22:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:54.702 nvme0n1
00:17:54.702 07:22:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:17:54.702 07:22:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:54.702 07:22:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:54.702 07:22:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:54.702 07:22:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:17:54.702 07:22:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:17:54.962 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:54.962 Zero copy mechanism will not be used.
00:17:54.962 Running I/O for 2 seconds...
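Note: the xtrace above compresses the whole digest-error pass into a few dense lines, so the sequence of RPCs it issues is easy to miss. Condensed from the commands visible in the trace, the pass amounts to roughly the sketch below. It is illustrative only, not the test script itself: it assumes the bdevperf instance launched above is still listening on /var/tmp/bperf.sock, the BPERF_RPC variable is just shorthand introduced here, and rpc.py and jq are called directly where host/digest.sh goes through its bperf_rpc, rpc_cmd and get_transient_errcount helpers (in the trace the crc32c injection goes through rpc_cmd rather than through bperf.sock).

  BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  # Collect per-bdev NVMe error statistics and retry failed I/O instead of failing the job.
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Clear any crc32c error injection left over from the previous pass (suite's rpc_cmd helper).
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  # Attach the target subsystem over TCP with the data digest enabled.
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Start corrupting crc32c results at the given interval so data digests begin to fail.
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  # Drive the randwrite workload configured when bdevperf was launched (-t 2, so 2 seconds).
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  # Finally, read back the transient-transport-error counter; the test asserts it is > 0,
  # which is the "(( 120 > 0 ))" check seen earlier for the qd=128 pass.
  $BPERF_RPC bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Each "Data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" pair in the surrounding output is one write that hit an injected crc32c corruption and was completed with a transient transport status, which is what the counter above accumulates.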
00:17:54.962 [2024-07-15 07:22:03.746093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.746434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.962 [2024-07-15 07:22:03.746467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.962 [2024-07-15 07:22:03.752125] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.752458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.962 [2024-07-15 07:22:03.752489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.962 [2024-07-15 07:22:03.758199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.758518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.962 [2024-07-15 07:22:03.758549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.962 [2024-07-15 07:22:03.764213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.764535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.962 [2024-07-15 07:22:03.764566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.962 [2024-07-15 07:22:03.770271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.770591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.962 [2024-07-15 07:22:03.770622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.962 [2024-07-15 07:22:03.776256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.776574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.962 [2024-07-15 07:22:03.776604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.962 [2024-07-15 07:22:03.782301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.782620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.962 [2024-07-15 07:22:03.782650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.962 [2024-07-15 07:22:03.788249] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.788566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.962 [2024-07-15 07:22:03.788597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.962 [2024-07-15 07:22:03.794331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.794650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.962 [2024-07-15 07:22:03.794680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.962 [2024-07-15 07:22:03.800368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.800687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.962 [2024-07-15 07:22:03.800718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.962 [2024-07-15 07:22:03.806410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.806739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.962 [2024-07-15 07:22:03.806770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.962 [2024-07-15 07:22:03.812457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.812777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.962 [2024-07-15 07:22:03.812808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.962 [2024-07-15 07:22:03.818602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.818923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.962 [2024-07-15 07:22:03.818957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.962 [2024-07-15 07:22:03.824608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.824929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.962 [2024-07-15 07:22:03.824961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.962 [2024-07-15 07:22:03.830693] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.831019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.962 [2024-07-15 07:22:03.831052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.962 [2024-07-15 07:22:03.836699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.837018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.962 [2024-07-15 07:22:03.837049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.962 [2024-07-15 07:22:03.842827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.843173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.962 [2024-07-15 07:22:03.843204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.962 [2024-07-15 07:22:03.848847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.849183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.962 [2024-07-15 07:22:03.849215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.962 [2024-07-15 07:22:03.854883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.855215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.962 [2024-07-15 07:22:03.855250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.962 [2024-07-15 07:22:03.860901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.861233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.962 [2024-07-15 07:22:03.861268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.962 [2024-07-15 07:22:03.866935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.867268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.962 [2024-07-15 07:22:03.867303] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.962 [2024-07-15 07:22:03.873007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.873346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.962 [2024-07-15 07:22:03.873382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.962 [2024-07-15 07:22:03.879024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.879369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.962 [2024-07-15 07:22:03.879409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.962 [2024-07-15 07:22:03.885047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.885386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.962 [2024-07-15 07:22:03.885421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.962 [2024-07-15 07:22:03.891246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.891573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.962 [2024-07-15 07:22:03.891603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.962 [2024-07-15 07:22:03.897605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.962 [2024-07-15 07:22:03.897935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.963 [2024-07-15 07:22:03.897965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.963 [2024-07-15 07:22:03.903650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.963 [2024-07-15 07:22:03.903969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.963 [2024-07-15 07:22:03.904000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.963 [2024-07-15 07:22:03.909712] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:54.963 [2024-07-15 07:22:03.910037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.963 
[2024-07-15 07:22:03.910067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.222 [2024-07-15 07:22:03.915998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.222 [2024-07-15 07:22:03.916345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.222 [2024-07-15 07:22:03.916375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.222 [2024-07-15 07:22:03.922191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.222 [2024-07-15 07:22:03.922514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.222 [2024-07-15 07:22:03.922544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.222 [2024-07-15 07:22:03.928224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.222 [2024-07-15 07:22:03.928544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.222 [2024-07-15 07:22:03.928576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.222 [2024-07-15 07:22:03.933829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.222 [2024-07-15 07:22:03.934174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.222 [2024-07-15 07:22:03.934205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.222 [2024-07-15 07:22:03.939166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.222 [2024-07-15 07:22:03.939482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.222 [2024-07-15 07:22:03.939511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.222 [2024-07-15 07:22:03.944400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.222 [2024-07-15 07:22:03.944706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.222 [2024-07-15 07:22:03.944737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.222 [2024-07-15 07:22:03.949650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.222 [2024-07-15 07:22:03.949971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.222 [2024-07-15 07:22:03.950000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.222 [2024-07-15 07:22:03.954927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.222 [2024-07-15 07:22:03.955251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.222 [2024-07-15 07:22:03.955284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.222 [2024-07-15 07:22:03.960156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.222 [2024-07-15 07:22:03.960463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.222 [2024-07-15 07:22:03.960494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.222 [2024-07-15 07:22:03.965398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.222 [2024-07-15 07:22:03.965719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.222 [2024-07-15 07:22:03.965748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.222 [2024-07-15 07:22:03.970617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.222 [2024-07-15 07:22:03.970932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.222 [2024-07-15 07:22:03.970962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.222 [2024-07-15 07:22:03.976010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.222 [2024-07-15 07:22:03.976338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.222 [2024-07-15 07:22:03.976385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.222 [2024-07-15 07:22:03.981309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.222 [2024-07-15 07:22:03.981630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.222 [2024-07-15 07:22:03.981659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.222 [2024-07-15 07:22:03.986563] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.222 [2024-07-15 07:22:03.986871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.222 [2024-07-15 07:22:03.986901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.222 [2024-07-15 07:22:03.991777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.222 [2024-07-15 07:22:03.992101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.222 [2024-07-15 07:22:03.992132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.222 [2024-07-15 07:22:03.997009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.222 [2024-07-15 07:22:03.997334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.222 [2024-07-15 07:22:03.997367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.222 [2024-07-15 07:22:04.002318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.222 [2024-07-15 07:22:04.002630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.222 [2024-07-15 07:22:04.002660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.222 [2024-07-15 07:22:04.007528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.222 [2024-07-15 07:22:04.007834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.222 [2024-07-15 07:22:04.007864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.222 [2024-07-15 07:22:04.012769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.222 [2024-07-15 07:22:04.013089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.222 [2024-07-15 07:22:04.013118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.222 [2024-07-15 07:22:04.018027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.222 [2024-07-15 07:22:04.018354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.018389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.023281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.023587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.023616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.028530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.028842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.028873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.033784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.034104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.034135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.039006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.039329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.039363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.044216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.044521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.044551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.049436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.049753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.049782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.054683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.054993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.055023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.059922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 
[2024-07-15 07:22:04.060242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.060276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.065088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.065392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.065426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.070316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.070622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.070651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.075536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.075854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.075884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.080728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.081040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.081082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.085944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.086271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.086300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.091268] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.091600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.091630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.096700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.097035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.097065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.102023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.102354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.102395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.107324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.107633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.107663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.112573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.112880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.112910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.117797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.118118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.118147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.123003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.123322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.123358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.128210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.128518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.128547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.133464] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.133785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.133816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.138695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.139002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.139033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.144063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.144387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.144418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.149386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.149705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.149735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.154945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.155272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.155307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.160230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.160537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.160567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.223 [2024-07-15 07:22:04.165504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.223 [2024-07-15 07:22:04.165822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.223 [2024-07-15 07:22:04.165852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
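Every record in this stretch of output is the same two-line pair: a data_crc32_calc_done "Data digest error" from tcp.c followed by a WRITE completion printed as COMMAND TRANSIENT TRANSPORT ERROR (00/22). A simple way to sanity-check a run like this from a saved copy of the console output (digest_error.log below is a hypothetical file name) is to confirm the two counts line up:

  grep -c 'Data digest error' digest_error.log
  grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' digest_error.log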
00:17:55.224 [2024-07-15 07:22:04.170744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.224 [2024-07-15 07:22:04.171053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.224 [2024-07-15 07:22:04.171098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.484 [2024-07-15 07:22:04.176326] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.484 [2024-07-15 07:22:04.176712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.484 [2024-07-15 07:22:04.176774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.484 [2024-07-15 07:22:04.181558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.484 [2024-07-15 07:22:04.181633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.484 [2024-07-15 07:22:04.181660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.484 [2024-07-15 07:22:04.186811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.484 [2024-07-15 07:22:04.186888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.484 [2024-07-15 07:22:04.186915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.484 [2024-07-15 07:22:04.191987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.484 [2024-07-15 07:22:04.192059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.484 [2024-07-15 07:22:04.192101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.484 [2024-07-15 07:22:04.197233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.484 [2024-07-15 07:22:04.197310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.484 [2024-07-15 07:22:04.197335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.484 [2024-07-15 07:22:04.202492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.484 [2024-07-15 07:22:04.202572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.484 [2024-07-15 07:22:04.202597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.484 [2024-07-15 07:22:04.207678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.484 [2024-07-15 07:22:04.207750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.484 [2024-07-15 07:22:04.207774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.484 [2024-07-15 07:22:04.212862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.484 [2024-07-15 07:22:04.212937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.484 [2024-07-15 07:22:04.212962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.484 [2024-07-15 07:22:04.218061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.484 [2024-07-15 07:22:04.218144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.484 [2024-07-15 07:22:04.218169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.484 [2024-07-15 07:22:04.223263] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.484 [2024-07-15 07:22:04.223335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.484 [2024-07-15 07:22:04.223359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.228454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.228529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.228553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.233731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.233808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.233832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.238933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.239010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.239034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.244169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.244240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.244265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.249284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.249356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.249380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.254562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.254635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.254660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.259837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.259913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.259937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.265172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.265274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.265298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.270425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.270498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.270522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.275603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.275675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.275699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.280919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.280993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.281018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.286152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.286228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.286253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.291328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.291403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.291426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.296496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.296567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.296591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.301669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.301742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.301767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.306946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.307019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.307043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.312103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.312174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 
[2024-07-15 07:22:04.312199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.317292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.317368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.317392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.322488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.322559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.322583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.327690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.327761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.327785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.332875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.332946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.332970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.338146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.338222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.338247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.343390] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.343462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.343487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.348568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.348640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.348664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.353919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.353993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.354017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.359131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.359203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.359227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.364357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.364433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.364458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.369477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.369562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.369586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.485 [2024-07-15 07:22:04.374685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.485 [2024-07-15 07:22:04.374757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.485 [2024-07-15 07:22:04.374780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.486 [2024-07-15 07:22:04.379875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.486 [2024-07-15 07:22:04.379950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.486 [2024-07-15 07:22:04.379974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.486 [2024-07-15 07:22:04.385094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.486 [2024-07-15 07:22:04.385166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.486 [2024-07-15 07:22:04.385190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.486 [2024-07-15 07:22:04.390290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.486 [2024-07-15 07:22:04.390363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.486 [2024-07-15 07:22:04.390386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.486 [2024-07-15 07:22:04.395480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.486 [2024-07-15 07:22:04.395553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.486 [2024-07-15 07:22:04.395577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.486 [2024-07-15 07:22:04.400623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.486 [2024-07-15 07:22:04.400695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.486 [2024-07-15 07:22:04.400719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.486 [2024-07-15 07:22:04.405851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.486 [2024-07-15 07:22:04.405940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.486 [2024-07-15 07:22:04.405964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.486 [2024-07-15 07:22:04.411066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.486 [2024-07-15 07:22:04.411150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.486 [2024-07-15 07:22:04.411174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.486 [2024-07-15 07:22:04.416250] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.486 [2024-07-15 07:22:04.416323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.486 [2024-07-15 07:22:04.416346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.486 [2024-07-15 07:22:04.421464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.486 [2024-07-15 07:22:04.421535] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.486 [2024-07-15 07:22:04.421570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.486 [2024-07-15 07:22:04.426660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.486 [2024-07-15 07:22:04.426732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.486 [2024-07-15 07:22:04.426755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.486 [2024-07-15 07:22:04.431816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.486 [2024-07-15 07:22:04.431903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.486 [2024-07-15 07:22:04.431927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.437271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.437347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.437372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.442527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.442602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.442627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.447764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.447840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.447864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.452972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.453045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.453069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.458297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 
[2024-07-15 07:22:04.458399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.458422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.463698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.463776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.463800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.468869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.468946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.468969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.474108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.474184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.474208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.479297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.479373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.479397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.484524] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.484600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.484624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.489727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.489799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.489823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.494871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.494948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.494972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.500032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.500124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.500148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.505210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.505282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.505306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.510455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.510551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.510578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.515643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.515717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.515743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.522013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.522115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.522140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.527636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.527727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.527754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.532870] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.532957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.532982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.538183] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.538271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.538296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.543442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.543529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.543554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.548593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.548670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.548695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.553822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.553894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.553918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.559248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.559323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.559347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.564478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.564564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.564588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:55.746 [2024-07-15 07:22:04.569681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.569761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.569785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.574878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.574959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.574983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.580093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.580179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.580202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.585269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.585364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.585388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.590492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.590570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.590594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.595685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.595761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.595785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.600850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.600943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.746 [2024-07-15 07:22:04.600967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.746 [2024-07-15 07:22:04.606045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.746 [2024-07-15 07:22:04.606135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.747 [2024-07-15 07:22:04.606160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.747 [2024-07-15 07:22:04.611234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.747 [2024-07-15 07:22:04.611308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.747 [2024-07-15 07:22:04.611333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.747 [2024-07-15 07:22:04.617640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.747 [2024-07-15 07:22:04.617722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.747 [2024-07-15 07:22:04.617747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.747 [2024-07-15 07:22:04.623033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.747 [2024-07-15 07:22:04.623143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.747 [2024-07-15 07:22:04.623168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.747 [2024-07-15 07:22:04.628377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.747 [2024-07-15 07:22:04.628448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.747 [2024-07-15 07:22:04.628472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.747 [2024-07-15 07:22:04.633736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.747 [2024-07-15 07:22:04.633812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.747 [2024-07-15 07:22:04.633836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.747 [2024-07-15 07:22:04.639093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.747 [2024-07-15 07:22:04.639167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.747 [2024-07-15 07:22:04.639191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.747 [2024-07-15 07:22:04.644398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.747 [2024-07-15 07:22:04.644485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.747 [2024-07-15 07:22:04.644510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.747 [2024-07-15 07:22:04.649648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.747 [2024-07-15 07:22:04.649730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.747 [2024-07-15 07:22:04.649755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.747 [2024-07-15 07:22:04.654907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.747 [2024-07-15 07:22:04.654982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.747 [2024-07-15 07:22:04.655005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.747 [2024-07-15 07:22:04.660154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.747 [2024-07-15 07:22:04.660235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.747 [2024-07-15 07:22:04.660259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.747 [2024-07-15 07:22:04.665391] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.747 [2024-07-15 07:22:04.665474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.747 [2024-07-15 07:22:04.665498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.747 [2024-07-15 07:22:04.670593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.747 [2024-07-15 07:22:04.670666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.747 [2024-07-15 07:22:04.670691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.747 [2024-07-15 07:22:04.675782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.747 [2024-07-15 07:22:04.675870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.747 [2024-07-15 07:22:04.675894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.747 [2024-07-15 07:22:04.681007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.747 [2024-07-15 07:22:04.681111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.747 [2024-07-15 07:22:04.681136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.747 [2024-07-15 07:22:04.686202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.747 [2024-07-15 07:22:04.686275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.747 [2024-07-15 07:22:04.686299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.747 [2024-07-15 07:22:04.691351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.747 [2024-07-15 07:22:04.691421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.747 [2024-07-15 07:22:04.691446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.747 [2024-07-15 07:22:04.696543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:55.747 [2024-07-15 07:22:04.696615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.747 [2024-07-15 07:22:04.696639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.007 [2024-07-15 07:22:04.701722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.007 [2024-07-15 07:22:04.701794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-07-15 07:22:04.701818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.007 [2024-07-15 07:22:04.706932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.007 [2024-07-15 07:22:04.707011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-07-15 07:22:04.707035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.007 [2024-07-15 07:22:04.712175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.007 [2024-07-15 07:22:04.712264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 
[2024-07-15 07:22:04.712288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.007 [2024-07-15 07:22:04.717403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.007 [2024-07-15 07:22:04.717478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-07-15 07:22:04.717502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.007 [2024-07-15 07:22:04.723960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.007 [2024-07-15 07:22:04.724090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-07-15 07:22:04.724114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.007 [2024-07-15 07:22:04.729516] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.007 [2024-07-15 07:22:04.729611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-07-15 07:22:04.729636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.007 [2024-07-15 07:22:04.734937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.007 [2024-07-15 07:22:04.735018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-07-15 07:22:04.735043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.007 [2024-07-15 07:22:04.740233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.007 [2024-07-15 07:22:04.740324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-07-15 07:22:04.740349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.007 [2024-07-15 07:22:04.745503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.007 [2024-07-15 07:22:04.745602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-07-15 07:22:04.745626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.007 [2024-07-15 07:22:04.750757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.007 [2024-07-15 07:22:04.750848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-07-15 07:22:04.750874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.007 [2024-07-15 07:22:04.756010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.007 [2024-07-15 07:22:04.756100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-07-15 07:22:04.756126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.007 [2024-07-15 07:22:04.761214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.007 [2024-07-15 07:22:04.761294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-07-15 07:22:04.761318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.007 [2024-07-15 07:22:04.766531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.007 [2024-07-15 07:22:04.766608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-07-15 07:22:04.766632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.007 [2024-07-15 07:22:04.771741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.007 [2024-07-15 07:22:04.771826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-07-15 07:22:04.771850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.007 [2024-07-15 07:22:04.776914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.007 [2024-07-15 07:22:04.776988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-07-15 07:22:04.777012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.007 [2024-07-15 07:22:04.782105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.007 [2024-07-15 07:22:04.782185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-07-15 07:22:04.782208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.007 [2024-07-15 07:22:04.787218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.007 [2024-07-15 07:22:04.787289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-07-15 07:22:04.787313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.007 [2024-07-15 07:22:04.792440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.007 [2024-07-15 07:22:04.792539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-07-15 07:22:04.792570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.007 [2024-07-15 07:22:04.797755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.007 [2024-07-15 07:22:04.797843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-07-15 07:22:04.797871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.007 [2024-07-15 07:22:04.802972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.007 [2024-07-15 07:22:04.803048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-07-15 07:22:04.803089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.007 [2024-07-15 07:22:04.808714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.007 [2024-07-15 07:22:04.808793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-07-15 07:22:04.808817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.007 [2024-07-15 07:22:04.814112] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.007 [2024-07-15 07:22:04.814186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-07-15 07:22:04.814211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.007 [2024-07-15 07:22:04.819341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.007 [2024-07-15 07:22:04.819431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-07-15 07:22:04.819455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.007 [2024-07-15 07:22:04.824609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.007 [2024-07-15 07:22:04.824686] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.007 [2024-07-15 07:22:04.824710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.008 [2024-07-15 07:22:04.829964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.008 [2024-07-15 07:22:04.830045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-07-15 07:22:04.830069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.008 [2024-07-15 07:22:04.835995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.008 [2024-07-15 07:22:04.836099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-07-15 07:22:04.836136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.008 [2024-07-15 07:22:04.841488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.008 [2024-07-15 07:22:04.841608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-07-15 07:22:04.841638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.008 [2024-07-15 07:22:04.847000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.008 [2024-07-15 07:22:04.847123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-07-15 07:22:04.847157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.008 [2024-07-15 07:22:04.852384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.008 [2024-07-15 07:22:04.852485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-07-15 07:22:04.852519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.008 [2024-07-15 07:22:04.857823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.008 [2024-07-15 07:22:04.857911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-07-15 07:22:04.857937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.008 [2024-07-15 07:22:04.863204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.008 
[2024-07-15 07:22:04.863294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-07-15 07:22:04.863318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.008 [2024-07-15 07:22:04.868513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.008 [2024-07-15 07:22:04.868600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-07-15 07:22:04.868624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.008 [2024-07-15 07:22:04.873693] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.008 [2024-07-15 07:22:04.873783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-07-15 07:22:04.873807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.008 [2024-07-15 07:22:04.878862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.008 [2024-07-15 07:22:04.878946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-07-15 07:22:04.878971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.008 [2024-07-15 07:22:04.884047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.008 [2024-07-15 07:22:04.884135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-07-15 07:22:04.884160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.008 [2024-07-15 07:22:04.889191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.008 [2024-07-15 07:22:04.889275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-07-15 07:22:04.889299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.008 [2024-07-15 07:22:04.894362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.008 [2024-07-15 07:22:04.894442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-07-15 07:22:04.894466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.008 [2024-07-15 07:22:04.899568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.008 [2024-07-15 07:22:04.899640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-07-15 07:22:04.899664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.008 [2024-07-15 07:22:04.904726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.008 [2024-07-15 07:22:04.904796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-07-15 07:22:04.904820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.008 [2024-07-15 07:22:04.909868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.008 [2024-07-15 07:22:04.909953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-07-15 07:22:04.909977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.008 [2024-07-15 07:22:04.915026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.008 [2024-07-15 07:22:04.915122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-07-15 07:22:04.915147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.008 [2024-07-15 07:22:04.920203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.008 [2024-07-15 07:22:04.920279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-07-15 07:22:04.920304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.008 [2024-07-15 07:22:04.925404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.008 [2024-07-15 07:22:04.925475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-07-15 07:22:04.925501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.008 [2024-07-15 07:22:04.930571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.008 [2024-07-15 07:22:04.930640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-07-15 07:22:04.930664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.008 [2024-07-15 07:22:04.935770] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.008 [2024-07-15 07:22:04.935842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-07-15 07:22:04.935866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.008 [2024-07-15 07:22:04.940943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.008 [2024-07-15 07:22:04.941014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-07-15 07:22:04.941038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.008 [2024-07-15 07:22:04.946191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.008 [2024-07-15 07:22:04.946268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-07-15 07:22:04.946295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.008 [2024-07-15 07:22:04.951477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.008 [2024-07-15 07:22:04.951576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-07-15 07:22:04.951609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.008 [2024-07-15 07:22:04.956753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.008 [2024-07-15 07:22:04.956850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.008 [2024-07-15 07:22:04.956882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.268 [2024-07-15 07:22:04.962008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.268 [2024-07-15 07:22:04.962124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.268 [2024-07-15 07:22:04.962160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.268 [2024-07-15 07:22:04.967262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.268 [2024-07-15 07:22:04.967351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.268 [2024-07-15 07:22:04.967376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:56.268 [2024-07-15 07:22:04.972485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.268 [2024-07-15 07:22:04.972580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.268 [2024-07-15 07:22:04.972605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.268 [2024-07-15 07:22:04.977810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.268 [2024-07-15 07:22:04.977897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.268 [2024-07-15 07:22:04.977922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.268 [2024-07-15 07:22:04.983020] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.268 [2024-07-15 07:22:04.983108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.268 [2024-07-15 07:22:04.983132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.268 [2024-07-15 07:22:04.988229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.268 [2024-07-15 07:22:04.988310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.268 [2024-07-15 07:22:04.988334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.268 [2024-07-15 07:22:04.993390] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.268 [2024-07-15 07:22:04.993473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.268 [2024-07-15 07:22:04.993499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:04.998506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:04.998613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:04.998651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.003596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.003675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.003706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.008851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.008925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.008952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.014049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.014166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.014192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.019276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.019359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.019384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.024507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.024578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.024603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.029704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.029783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.029808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.034915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.034991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.035016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.040121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.040194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.040218] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.045381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.045480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.045505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.050676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.050751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.050776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.055899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.055972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.055997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.061087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.061177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.061202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.066296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.066369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.066395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.071514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.071601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.071625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.076644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.076738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.076762] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.081887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.081959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.081982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.087048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.087134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.087159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.092220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.092290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.092314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.097404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.097480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.097505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.102615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.102700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.102724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.107796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.107868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.107892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.112917] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.112989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:56.269 [2024-07-15 07:22:05.113013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.118187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.118260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.118285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.123434] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.123528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.123553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.128576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.128649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.128674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.133688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.133762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.133786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.138886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.138972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.138996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.269 [2024-07-15 07:22:05.144028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.269 [2024-07-15 07:22:05.144119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.269 [2024-07-15 07:22:05.144144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.270 [2024-07-15 07:22:05.149248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.270 [2024-07-15 07:22:05.149319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.270 [2024-07-15 07:22:05.149343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.270 [2024-07-15 07:22:05.154489] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.270 [2024-07-15 07:22:05.154563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.270 [2024-07-15 07:22:05.154587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.270 [2024-07-15 07:22:05.159736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.270 [2024-07-15 07:22:05.159824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.270 [2024-07-15 07:22:05.159859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.270 [2024-07-15 07:22:05.164856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.270 [2024-07-15 07:22:05.164946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.270 [2024-07-15 07:22:05.164970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.270 [2024-07-15 07:22:05.170095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.270 [2024-07-15 07:22:05.170166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.270 [2024-07-15 07:22:05.170191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.270 [2024-07-15 07:22:05.175316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.270 [2024-07-15 07:22:05.175410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.270 [2024-07-15 07:22:05.175433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.270 [2024-07-15 07:22:05.180570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.270 [2024-07-15 07:22:05.180643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.270 [2024-07-15 07:22:05.180667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.270 [2024-07-15 07:22:05.185719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.270 [2024-07-15 07:22:05.185791] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.270 [2024-07-15 07:22:05.185815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.270 [2024-07-15 07:22:05.190932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.270 [2024-07-15 07:22:05.191029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.270 [2024-07-15 07:22:05.191058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.270 [2024-07-15 07:22:05.196286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.270 [2024-07-15 07:22:05.196400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.270 [2024-07-15 07:22:05.196433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.270 [2024-07-15 07:22:05.201382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.270 [2024-07-15 07:22:05.201461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.270 [2024-07-15 07:22:05.201487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.270 [2024-07-15 07:22:05.206625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.270 [2024-07-15 07:22:05.206719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.270 [2024-07-15 07:22:05.206744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.270 [2024-07-15 07:22:05.211794] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.270 [2024-07-15 07:22:05.211887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.270 [2024-07-15 07:22:05.211911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.270 [2024-07-15 07:22:05.216959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.270 [2024-07-15 07:22:05.217059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.270 [2024-07-15 07:22:05.217098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.538 [2024-07-15 07:22:05.222174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.538 [2024-07-15 07:22:05.222260] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.538 [2024-07-15 07:22:05.222284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.538 [2024-07-15 07:22:05.227447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.538 [2024-07-15 07:22:05.227518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.538 [2024-07-15 07:22:05.227542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.538 [2024-07-15 07:22:05.232577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.538 [2024-07-15 07:22:05.232652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.538 [2024-07-15 07:22:05.232676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.538 [2024-07-15 07:22:05.237780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.538 [2024-07-15 07:22:05.237856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.538 [2024-07-15 07:22:05.237879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.538 [2024-07-15 07:22:05.242950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.538 [2024-07-15 07:22:05.243038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.538 [2024-07-15 07:22:05.243061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.538 [2024-07-15 07:22:05.248176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.538 [2024-07-15 07:22:05.248248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.538 [2024-07-15 07:22:05.248273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.538 [2024-07-15 07:22:05.253353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.538 [2024-07-15 07:22:05.253429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.538 [2024-07-15 07:22:05.253454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.538 [2024-07-15 07:22:05.258624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 
00:17:56.538 [2024-07-15 07:22:05.258718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.538 [2024-07-15 07:22:05.258742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.538 [2024-07-15 07:22:05.263796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.538 [2024-07-15 07:22:05.263883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.538 [2024-07-15 07:22:05.263907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.538 [2024-07-15 07:22:05.269026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.538 [2024-07-15 07:22:05.269139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.538 [2024-07-15 07:22:05.269163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.538 [2024-07-15 07:22:05.274323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.539 [2024-07-15 07:22:05.274407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.539 [2024-07-15 07:22:05.274432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.539 [2024-07-15 07:22:05.279564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.539 [2024-07-15 07:22:05.279650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.539 [2024-07-15 07:22:05.279674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.539 [2024-07-15 07:22:05.284774] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.539 [2024-07-15 07:22:05.284860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.539 [2024-07-15 07:22:05.284884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.539 [2024-07-15 07:22:05.290010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.539 [2024-07-15 07:22:05.290141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.539 [2024-07-15 07:22:05.290173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.539 [2024-07-15 07:22:05.295154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.539 [2024-07-15 07:22:05.295233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.539 [2024-07-15 07:22:05.295257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.539 [2024-07-15 07:22:05.300316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.539 [2024-07-15 07:22:05.300391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.539 [2024-07-15 07:22:05.300415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.539 [2024-07-15 07:22:05.305485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.539 [2024-07-15 07:22:05.305571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.539 [2024-07-15 07:22:05.305596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.539 [2024-07-15 07:22:05.310695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.539 [2024-07-15 07:22:05.310771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.539 [2024-07-15 07:22:05.310795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.539 [2024-07-15 07:22:05.315913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.539 [2024-07-15 07:22:05.316001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.539 [2024-07-15 07:22:05.316025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.539 [2024-07-15 07:22:05.321017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.539 [2024-07-15 07:22:05.321108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.539 [2024-07-15 07:22:05.321132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.539 [2024-07-15 07:22:05.326229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.539 [2024-07-15 07:22:05.326312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.539 [2024-07-15 07:22:05.326336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.539 [2024-07-15 07:22:05.331415] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.539 [2024-07-15 07:22:05.331493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.539 [2024-07-15 07:22:05.331516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.539 [2024-07-15 07:22:05.336642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.539 [2024-07-15 07:22:05.336713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.539 [2024-07-15 07:22:05.336736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.539 [2024-07-15 07:22:05.341836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.539 [2024-07-15 07:22:05.341932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.539 [2024-07-15 07:22:05.341956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.539 [2024-07-15 07:22:05.346958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.539 [2024-07-15 07:22:05.347038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.539 [2024-07-15 07:22:05.347062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.539 [2024-07-15 07:22:05.352157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.539 [2024-07-15 07:22:05.352250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.539 [2024-07-15 07:22:05.352274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.539 [2024-07-15 07:22:05.357278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.539 [2024-07-15 07:22:05.357355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.539 [2024-07-15 07:22:05.357379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.539 [2024-07-15 07:22:05.362461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.539 [2024-07-15 07:22:05.362530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.539 [2024-07-15 07:22:05.362555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:56.539 [2024-07-15 07:22:05.367663] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.539 [2024-07-15 07:22:05.367749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.539 [2024-07-15 07:22:05.367776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.539 [2024-07-15 07:22:05.372865] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.539 [2024-07-15 07:22:05.372945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.540 [2024-07-15 07:22:05.372971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.540 [2024-07-15 07:22:05.378152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.540 [2024-07-15 07:22:05.378228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.540 [2024-07-15 07:22:05.378254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.540 [2024-07-15 07:22:05.383299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.540 [2024-07-15 07:22:05.383385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.540 [2024-07-15 07:22:05.383409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.540 [2024-07-15 07:22:05.388462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.540 [2024-07-15 07:22:05.388541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.540 [2024-07-15 07:22:05.388565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.540 [2024-07-15 07:22:05.393659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.540 [2024-07-15 07:22:05.393730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.540 [2024-07-15 07:22:05.393755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.540 [2024-07-15 07:22:05.398779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.540 [2024-07-15 07:22:05.398847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.540 [2024-07-15 07:22:05.398871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.540 [2024-07-15 07:22:05.403910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.540 [2024-07-15 07:22:05.403994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.540 [2024-07-15 07:22:05.404018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.540 [2024-07-15 07:22:05.409120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.540 [2024-07-15 07:22:05.409199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.540 [2024-07-15 07:22:05.409223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.540 [2024-07-15 07:22:05.414306] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.540 [2024-07-15 07:22:05.414379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.540 [2024-07-15 07:22:05.414402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.540 [2024-07-15 07:22:05.419446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.540 [2024-07-15 07:22:05.419532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.540 [2024-07-15 07:22:05.419556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.540 [2024-07-15 07:22:05.424649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.540 [2024-07-15 07:22:05.424743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.540 [2024-07-15 07:22:05.424766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.540 [2024-07-15 07:22:05.429796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.540 [2024-07-15 07:22:05.429890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.540 [2024-07-15 07:22:05.429913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.540 [2024-07-15 07:22:05.434927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.540 [2024-07-15 07:22:05.435013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.540 [2024-07-15 07:22:05.435036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.540 [2024-07-15 07:22:05.440103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.540 [2024-07-15 07:22:05.440183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.540 [2024-07-15 07:22:05.440207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.540 [2024-07-15 07:22:05.445276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.540 [2024-07-15 07:22:05.445360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.540 [2024-07-15 07:22:05.445385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.540 [2024-07-15 07:22:05.450531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.540 [2024-07-15 07:22:05.450636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.540 [2024-07-15 07:22:05.450665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.540 [2024-07-15 07:22:05.455663] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.540 [2024-07-15 07:22:05.455736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.540 [2024-07-15 07:22:05.455762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.540 [2024-07-15 07:22:05.460830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.540 [2024-07-15 07:22:05.460924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.540 [2024-07-15 07:22:05.460948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.540 [2024-07-15 07:22:05.466039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.540 [2024-07-15 07:22:05.466139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.540 [2024-07-15 07:22:05.466163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.540 [2024-07-15 07:22:05.471249] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.540 [2024-07-15 07:22:05.471342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.541 [2024-07-15 07:22:05.471365] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.541 [2024-07-15 07:22:05.476348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.541 [2024-07-15 07:22:05.476430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.541 [2024-07-15 07:22:05.476454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.541 [2024-07-15 07:22:05.481574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.541 [2024-07-15 07:22:05.481649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.541 [2024-07-15 07:22:05.481674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.541 [2024-07-15 07:22:05.486775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.541 [2024-07-15 07:22:05.486851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.541 [2024-07-15 07:22:05.486875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.491999] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.492111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.801 [2024-07-15 07:22:05.492142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.497224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.497318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.801 [2024-07-15 07:22:05.497342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.502390] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.502473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.801 [2024-07-15 07:22:05.502496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.507612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.507684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:56.801 [2024-07-15 07:22:05.507708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.512752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.512825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.801 [2024-07-15 07:22:05.512849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.517945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.518041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.801 [2024-07-15 07:22:05.518065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.523089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.523165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.801 [2024-07-15 07:22:05.523188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.528441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.528523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.801 [2024-07-15 07:22:05.528549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.533636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.533734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.801 [2024-07-15 07:22:05.533760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.538876] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.538985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.801 [2024-07-15 07:22:05.539020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.544110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.544191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.801 [2024-07-15 07:22:05.544218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.549297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.549376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.801 [2024-07-15 07:22:05.549402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.554495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.554569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.801 [2024-07-15 07:22:05.554595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.559678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.559752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.801 [2024-07-15 07:22:05.559778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.564907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.564983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.801 [2024-07-15 07:22:05.565009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.570132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.570207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.801 [2024-07-15 07:22:05.570232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.575248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.575334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.801 [2024-07-15 07:22:05.575359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.580447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.580523] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.801 [2024-07-15 07:22:05.580548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.585656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.585735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.801 [2024-07-15 07:22:05.585759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.590847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.590925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.801 [2024-07-15 07:22:05.590950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.595976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.596064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.801 [2024-07-15 07:22:05.596104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.601161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.601251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.801 [2024-07-15 07:22:05.601275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.606335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.606415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.801 [2024-07-15 07:22:05.606440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.611579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.611651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.801 [2024-07-15 07:22:05.611675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.616758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.616831] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.801 [2024-07-15 07:22:05.616856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.801 [2024-07-15 07:22:05.621927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.801 [2024-07-15 07:22:05.622024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.802 [2024-07-15 07:22:05.622048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.802 [2024-07-15 07:22:05.628469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.802 [2024-07-15 07:22:05.628552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.802 [2024-07-15 07:22:05.628576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.802 [2024-07-15 07:22:05.634236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.802 [2024-07-15 07:22:05.634313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.802 [2024-07-15 07:22:05.634337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.802 [2024-07-15 07:22:05.639670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.802 [2024-07-15 07:22:05.639761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.802 [2024-07-15 07:22:05.639786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.802 [2024-07-15 07:22:05.645033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.802 [2024-07-15 07:22:05.645130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.802 [2024-07-15 07:22:05.645155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.802 [2024-07-15 07:22:05.650605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.802 [2024-07-15 07:22:05.650711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.802 [2024-07-15 07:22:05.650736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.802 [2024-07-15 07:22:05.655968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 
00:17:56.802 [2024-07-15 07:22:05.656056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.802 [2024-07-15 07:22:05.656106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.802 [2024-07-15 07:22:05.661298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.802 [2024-07-15 07:22:05.661396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.802 [2024-07-15 07:22:05.661420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.802 [2024-07-15 07:22:05.666661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.802 [2024-07-15 07:22:05.666778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.802 [2024-07-15 07:22:05.666803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.802 [2024-07-15 07:22:05.672649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.802 [2024-07-15 07:22:05.672721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.802 [2024-07-15 07:22:05.672746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.802 [2024-07-15 07:22:05.677942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.802 [2024-07-15 07:22:05.678027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.802 [2024-07-15 07:22:05.678052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.802 [2024-07-15 07:22:05.683277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.802 [2024-07-15 07:22:05.683362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.802 [2024-07-15 07:22:05.683386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.802 [2024-07-15 07:22:05.688500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.802 [2024-07-15 07:22:05.688576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.802 [2024-07-15 07:22:05.688600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.802 [2024-07-15 07:22:05.693749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.802 [2024-07-15 07:22:05.693832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.802 [2024-07-15 07:22:05.693857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.802 [2024-07-15 07:22:05.698960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.802 [2024-07-15 07:22:05.699046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.802 [2024-07-15 07:22:05.699084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.802 [2024-07-15 07:22:05.704224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.802 [2024-07-15 07:22:05.704304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.802 [2024-07-15 07:22:05.704328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.802 [2024-07-15 07:22:05.709405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.802 [2024-07-15 07:22:05.709480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.802 [2024-07-15 07:22:05.709504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.802 [2024-07-15 07:22:05.714654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.802 [2024-07-15 07:22:05.714744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.802 [2024-07-15 07:22:05.714768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.802 [2024-07-15 07:22:05.719961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.802 [2024-07-15 07:22:05.720049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.802 [2024-07-15 07:22:05.720088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.802 [2024-07-15 07:22:05.725293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.802 [2024-07-15 07:22:05.725378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.802 [2024-07-15 07:22:05.725402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.802 [2024-07-15 07:22:05.730560] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.802 [2024-07-15 07:22:05.730651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.802 [2024-07-15 07:22:05.730676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.802 [2024-07-15 07:22:05.735780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.802 [2024-07-15 07:22:05.735858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.802 [2024-07-15 07:22:05.735882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.802 [2024-07-15 07:22:05.740763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16be6a0) with pdu=0x2000190fef90 00:17:56.802 [2024-07-15 07:22:05.740839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.802 [2024-07-15 07:22:05.740864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.802 00:17:56.802 Latency(us) 00:17:56.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.802 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:56.802 nvme0n1 : 2.00 5823.34 727.92 0.00 0.00 2740.86 1534.14 6553.60 00:17:56.802 =================================================================================================================== 00:17:56.802 Total : 5823.34 727.92 0.00 0.00 2740.86 1534.14 6553.60 00:17:56.802 0 00:17:57.060 07:22:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:57.060 07:22:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:57.060 07:22:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:57.060 | .driver_specific 00:17:57.060 | .nvme_error 00:17:57.060 | .status_code 00:17:57.060 | .command_transient_transport_error' 00:17:57.060 07:22:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:57.318 07:22:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 376 > 0 )) 00:17:57.318 07:22:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80529 00:17:57.318 07:22:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80529 ']' 00:17:57.318 07:22:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80529 00:17:57.318 07:22:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:17:57.318 07:22:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:57.318 07:22:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80529 00:17:57.318 07:22:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 
-- # process_name=reactor_1 00:17:57.318 07:22:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:57.318 killing process with pid 80529 00:17:57.318 Received shutdown signal, test time was about 2.000000 seconds 00:17:57.318 00:17:57.318 Latency(us) 00:17:57.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.318 =================================================================================================================== 00:17:57.319 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:57.319 07:22:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80529' 00:17:57.319 07:22:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80529 00:17:57.319 07:22:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80529 00:17:57.319 07:22:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80330 00:17:57.319 07:22:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80330 ']' 00:17:57.319 07:22:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80330 00:17:57.319 07:22:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:17:57.319 07:22:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:57.319 07:22:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80330 00:17:57.319 07:22:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:57.319 killing process with pid 80330 00:17:57.319 07:22:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:57.319 07:22:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80330' 00:17:57.319 07:22:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80330 00:17:57.319 07:22:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80330 00:17:57.592 00:17:57.592 real 0m17.068s 00:17:57.592 user 0m33.189s 00:17:57.592 sys 0m4.452s 00:17:57.592 07:22:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:57.592 07:22:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:57.592 ************************************ 00:17:57.592 END TEST nvmf_digest_error 00:17:57.592 ************************************ 00:17:57.592 07:22:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:17:57.592 07:22:06 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:57.592 07:22:06 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:17:57.592 07:22:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:57.592 07:22:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:17:57.592 07:22:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:57.592 07:22:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:17:57.592 07:22:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:57.592 07:22:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:57.592 rmmod nvme_tcp 00:17:57.592 rmmod 
nvme_fabrics 00:17:57.884 rmmod nvme_keyring 00:17:57.884 07:22:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:57.884 07:22:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:17:57.884 07:22:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:17:57.884 07:22:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 80330 ']' 00:17:57.884 07:22:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 80330 00:17:57.884 07:22:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 80330 ']' 00:17:57.884 07:22:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 80330 00:17:57.884 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (80330) - No such process 00:17:57.884 Process with pid 80330 is not found 00:17:57.884 07:22:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 80330 is not found' 00:17:57.884 07:22:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:57.884 07:22:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:57.884 07:22:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:57.884 07:22:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:57.884 07:22:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:57.884 07:22:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.884 07:22:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:57.884 07:22:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.884 07:22:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:57.884 00:17:57.884 real 0m35.200s 00:17:57.884 user 1m7.513s 00:17:57.884 sys 0m9.258s 00:17:57.884 07:22:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:57.884 07:22:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:57.884 ************************************ 00:17:57.884 END TEST nvmf_digest 00:17:57.884 ************************************ 00:17:57.884 07:22:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:57.884 07:22:06 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:17:57.884 07:22:06 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:17:57.885 07:22:06 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:57.885 07:22:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:57.885 07:22:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:57.885 07:22:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:57.885 ************************************ 00:17:57.885 START TEST nvmf_host_multipath 00:17:57.885 ************************************ 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:57.885 * Looking for test storage... 
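The nvmf_host_multipath run that follows repeatedly flips the ANA state of the subsystem's two listeners (ports 4420 and 4421) and then checks that I/O really moved to the expected path. A condensed sketch of that check, assembled from the rpc.py, jq and awk commands echoed later in this trace (the helper names and the trace.txt handling here are illustrative, not the actual multipath.sh implementation):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

set_ana_state() {   # $1 = port, $2 = ANA state (optimized / non_optimized / inaccessible)
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s "$1" -n "$2"
}

confirm_io_on_port() {   # $1 = expected ANA state, $2 = expected port
    # Which listener currently reports the expected ANA state?
    active_port=$("$rpc" nvmf_subsystem_get_listeners "$nqn" |
        jq -r ".[] | select(.ana_states[0].ana_state==\"$1\") | .address.trsvcid")
    # nvmf_path.bt (attached via bpftrace.sh) writes lines such as
    # "@path[10.0.0.2, 4421]: 16912" into trace.txt; take the port from the first one.
    port=$(cut -d ']' -f1 trace.txt | awk '$1=="@path[10.0.0.2," {print $2}' | sed -n 1p)
    [[ "$active_port" == "$2" && "$port" == "$2" ]]
}

Each "Attaching 4 probes..." block below corresponds to one such round: attach the bpftrace probes, let bdevperf run I/O for about six seconds, then compare both answers against the expected port.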
00:17:57.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:17:57.885 07:22:06 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:57.885 Cannot find device "nvmf_tgt_br" 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:57.885 Cannot find device "nvmf_tgt_br2" 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br 
down 00:17:57.885 Cannot find device "nvmf_tgt_br" 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:17:57.885 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:58.143 Cannot find device "nvmf_tgt_br2" 00:17:58.144 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:17:58.144 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:58.144 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:58.144 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:58.144 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:58.144 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:17:58.144 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:58.144 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:58.144 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:17:58.144 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:58.144 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:58.144 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:58.144 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:58.144 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:58.144 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:58.144 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:58.144 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:58.144 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:58.144 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:58.144 07:22:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:58.144 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:58.144 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:58.144 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:58.144 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:58.144 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:58.144 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:58.144 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:58.144 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
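For reference, the veth/namespace topology that nvmf_veth_init is building here (initiator address 10.0.0.1 on the host, target addresses 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge) condensed into a standalone sketch. It mirrors the ip/iptables commands echoed above and immediately below; it is not the nvmf/common.sh code itself and needs root to run:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator leg (stays on the host)
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target leg
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target leg
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$link" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # sanity check, as in the trace below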
00:17:58.144 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:58.144 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:58.144 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:58.144 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:58.402 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:58.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:58.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:17:58.402 00:17:58.402 --- 10.0.0.2 ping statistics --- 00:17:58.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.402 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:17:58.402 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:58.402 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:58.402 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:17:58.402 00:17:58.402 --- 10.0.0.3 ping statistics --- 00:17:58.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.403 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:58.403 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:58.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:58.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:58.403 00:17:58.403 --- 10.0.0.1 ping statistics --- 00:17:58.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.403 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:58.403 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:58.403 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:17:58.403 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:58.403 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:58.403 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:58.403 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:58.403 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:58.403 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:58.403 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:58.403 07:22:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:58.403 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:58.403 07:22:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:58.403 07:22:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:58.403 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=80777 00:17:58.403 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:58.403 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 80777 00:17:58.403 07:22:07 nvmf_tcp.nvmf_host_multipath -- 
common/autotest_common.sh@829 -- # '[' -z 80777 ']' 00:17:58.403 07:22:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.403 07:22:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:58.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.403 07:22:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.403 07:22:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:58.403 07:22:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:58.403 [2024-07-15 07:22:07.185117] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:17:58.403 [2024-07-15 07:22:07.185202] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.403 [2024-07-15 07:22:07.321928] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:58.661 [2024-07-15 07:22:07.391535] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.661 [2024-07-15 07:22:07.391590] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.661 [2024-07-15 07:22:07.391604] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.661 [2024-07-15 07:22:07.391614] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.661 [2024-07-15 07:22:07.391623] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:58.661 [2024-07-15 07:22:07.391809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.661 [2024-07-15 07:22:07.391821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.661 [2024-07-15 07:22:07.425137] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:58.661 07:22:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:58.661 07:22:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:17:58.661 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:58.661 07:22:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:58.661 07:22:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:58.661 07:22:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.661 07:22:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80777 00:17:58.661 07:22:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:58.919 [2024-07-15 07:22:07.774041] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.919 07:22:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:59.177 Malloc0 00:17:59.177 07:22:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:59.435 07:22:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:00.002 07:22:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:00.002 [2024-07-15 07:22:08.877494] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:00.002 07:22:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:00.260 [2024-07-15 07:22:09.113632] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:00.260 07:22:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80825 00:18:00.260 07:22:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:00.260 07:22:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:00.260 07:22:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80825 /var/tmp/bdevperf.sock 00:18:00.260 07:22:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 80825 ']' 00:18:00.260 07:22:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:00.260 07:22:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:00.260 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:00.260 07:22:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:00.260 07:22:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:00.260 07:22:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:01.197 07:22:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:01.197 07:22:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:18:01.197 07:22:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:01.455 07:22:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:01.714 Nvme0n1 00:18:01.714 07:22:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:02.281 Nvme0n1 00:18:02.281 07:22:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:02.281 07:22:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:03.222 07:22:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:03.222 07:22:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:03.479 07:22:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:03.737 07:22:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:03.737 07:22:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80876 00:18:03.737 07:22:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:03.737 07:22:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80777 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:10.290 07:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:10.290 07:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:10.290 07:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:10.290 07:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:10.290 Attaching 4 probes... 
00:18:10.290 @path[10.0.0.2, 4421]: 16912 00:18:10.290 @path[10.0.0.2, 4421]: 17649 00:18:10.290 @path[10.0.0.2, 4421]: 17542 00:18:10.290 @path[10.0.0.2, 4421]: 17430 00:18:10.290 @path[10.0.0.2, 4421]: 16923 00:18:10.290 07:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:10.290 07:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:10.290 07:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:10.290 07:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:10.290 07:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:10.290 07:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:10.290 07:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80876 00:18:10.290 07:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:10.290 07:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:10.290 07:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:10.290 07:22:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:10.856 07:22:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:10.856 07:22:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80992 00:18:10.856 07:22:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:10.856 07:22:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80777 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:17.415 07:22:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:17.415 07:22:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:17.415 07:22:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:17.415 07:22:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:17.415 Attaching 4 probes... 
00:18:17.415 @path[10.0.0.2, 4420]: 17086 00:18:17.415 @path[10.0.0.2, 4420]: 17142 00:18:17.415 @path[10.0.0.2, 4420]: 17146 00:18:17.415 @path[10.0.0.2, 4420]: 16326 00:18:17.415 @path[10.0.0.2, 4420]: 17183 00:18:17.415 07:22:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:17.415 07:22:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:17.415 07:22:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:17.415 07:22:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:17.415 07:22:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:17.415 07:22:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:17.415 07:22:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80992 00:18:17.415 07:22:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:17.415 07:22:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:17.415 07:22:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:17.415 07:22:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:17.415 07:22:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:17.415 07:22:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81101 00:18:17.415 07:22:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80777 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:17.415 07:22:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:24.047 07:22:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:24.047 07:22:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:24.047 07:22:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:24.047 07:22:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:24.047 Attaching 4 probes... 
00:18:24.047 @path[10.0.0.2, 4421]: 14100 00:18:24.047 @path[10.0.0.2, 4421]: 17300 00:18:24.047 @path[10.0.0.2, 4421]: 15786 00:18:24.047 @path[10.0.0.2, 4421]: 17264 00:18:24.047 @path[10.0.0.2, 4421]: 17377 00:18:24.047 07:22:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:24.047 07:22:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:24.047 07:22:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:24.047 07:22:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:24.047 07:22:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:24.047 07:22:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:24.047 07:22:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81101 00:18:24.047 07:22:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:24.047 07:22:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:24.047 07:22:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:24.047 07:22:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:24.305 07:22:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:24.305 07:22:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81219 00:18:24.305 07:22:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:24.305 07:22:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80777 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:30.859 07:22:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:30.859 07:22:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:30.859 07:22:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:30.859 07:22:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:30.859 Attaching 4 probes... 
00:18:30.859 00:18:30.859 00:18:30.859 00:18:30.859 00:18:30.859 00:18:30.859 07:22:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:30.859 07:22:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:30.859 07:22:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:30.859 07:22:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:30.859 07:22:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:30.859 07:22:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:30.859 07:22:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81219 00:18:30.859 07:22:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:30.859 07:22:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:30.859 07:22:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:30.859 07:22:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:31.117 07:22:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:31.117 07:22:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80777 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:31.117 07:22:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81332 00:18:31.117 07:22:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:37.701 07:22:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:37.701 07:22:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:37.701 07:22:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:37.701 07:22:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:37.701 Attaching 4 probes... 
00:18:37.701 @path[10.0.0.2, 4421]: 16710 00:18:37.701 @path[10.0.0.2, 4421]: 17077 00:18:37.701 @path[10.0.0.2, 4421]: 16975 00:18:37.701 @path[10.0.0.2, 4421]: 16909 00:18:37.701 @path[10.0.0.2, 4421]: 17048 00:18:37.701 07:22:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:37.701 07:22:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:37.701 07:22:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:37.701 07:22:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:37.701 07:22:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:37.701 07:22:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:37.701 07:22:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81332 00:18:37.701 07:22:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:37.701 07:22:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:37.701 07:22:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:18:38.636 07:22:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:38.636 07:22:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81454 00:18:38.636 07:22:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80777 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:38.636 07:22:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:45.283 07:22:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:45.283 07:22:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:45.283 07:22:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:45.283 07:22:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:45.283 Attaching 4 probes... 
00:18:45.283 @path[10.0.0.2, 4420]: 16655 00:18:45.283 @path[10.0.0.2, 4420]: 16969 00:18:45.283 @path[10.0.0.2, 4420]: 16858 00:18:45.283 @path[10.0.0.2, 4420]: 17065 00:18:45.283 @path[10.0.0.2, 4420]: 16958 00:18:45.283 07:22:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:45.283 07:22:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:45.283 07:22:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:45.283 07:22:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:45.283 07:22:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:45.283 07:22:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:45.283 07:22:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81454 00:18:45.283 07:22:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:45.283 07:22:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:45.283 [2024-07-15 07:22:54.062203] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:45.283 07:22:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:45.541 07:22:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:18:52.105 07:23:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:18:52.105 07:23:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81630 00:18:52.105 07:23:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80777 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:52.105 07:23:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:58.672 07:23:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:58.672 07:23:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:58.672 07:23:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:58.672 07:23:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:58.672 Attaching 4 probes... 
00:18:58.672 @path[10.0.0.2, 4421]: 16563 00:18:58.672 @path[10.0.0.2, 4421]: 16799 00:18:58.672 @path[10.0.0.2, 4421]: 15763 00:18:58.672 @path[10.0.0.2, 4421]: 16338 00:18:58.672 @path[10.0.0.2, 4421]: 16476 00:18:58.672 07:23:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:58.672 07:23:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:58.672 07:23:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:58.672 07:23:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:58.672 07:23:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:58.672 07:23:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:58.672 07:23:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81630 00:18:58.672 07:23:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:58.672 07:23:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80825 00:18:58.672 07:23:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 80825 ']' 00:18:58.672 07:23:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 80825 00:18:58.672 07:23:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:18:58.672 07:23:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:58.672 07:23:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80825 00:18:58.672 killing process with pid 80825 00:18:58.672 07:23:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:58.672 07:23:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:58.672 07:23:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80825' 00:18:58.672 07:23:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 80825 00:18:58.672 07:23:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 80825 00:18:58.672 Connection closed with partial response: 00:18:58.672 00:18:58.672 00:18:58.672 07:23:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80825 00:18:58.672 07:23:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:58.672 [2024-07-15 07:22:09.182969] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:18:58.672 [2024-07-15 07:22:09.183120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80825 ] 00:18:58.672 [2024-07-15 07:22:09.319282] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.672 [2024-07-15 07:22:09.391240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.672 [2024-07-15 07:22:09.425365] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:58.672 Running I/O for 90 seconds... 
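The WRITE/READ completions dumped below all carry status 03/02, the NVMe path-related status "Asymmetric Access Inaccessible"; they are the expected result of flipping the listeners' ANA states away from the path in use while bdevperf keeps I/O running. The confirm_io_on_port cycle traced above amounts to roughly the following sketch. The RPC names, the jq filter and the awk/cut/sed parsing appear verbatim in the trace; the redirection of the bpftrace output into trace.txt and the <spdk_app_pid> placeholder (80777 in this run) are assumptions added for illustration only.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # flip the ANA state of each listener (ports 4420 and 4421 on 10.0.0.2)
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
  # count per-path I/O for a few seconds with the bundled bpftrace probe script
  /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh <spdk_app_pid> \
      /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > trace.txt &
  sleep 6
  # the port that should be active is the listener reporting the expected ANA state
  active_port=$($rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
      | jq -r '.[] | select(.ana_states[0].ana_state=="optimized") | .address.trsvcid')
  # the port that actually carried I/O is parsed out of the "@path[10.0.0.2, PORT]: COUNT" counters
  port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
  [[ "$port" == "$active_port" ]]

When both listeners are set inaccessible, the same cycle expects no I/O on either path, which is why the earlier block compares an empty port string against an empty expected value ('' == '').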
00:18:58.672 [2024-07-15 07:22:19.504321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:34800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.672 [2024-07-15 07:22:19.504394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:58.672 [2024-07-15 07:22:19.504456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:34808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.672 [2024-07-15 07:22:19.504480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:58.672 [2024-07-15 07:22:19.504505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:34816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.672 [2024-07-15 07:22:19.504522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:58.672 [2024-07-15 07:22:19.504544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:34824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.672 [2024-07-15 07:22:19.504560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:58.672 [2024-07-15 07:22:19.504582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:34832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.672 [2024-07-15 07:22:19.504597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:58.672 [2024-07-15 07:22:19.504619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:34840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.672 [2024-07-15 07:22:19.504634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:58.672 [2024-07-15 07:22:19.504656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:34848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.672 [2024-07-15 07:22:19.504671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:58.672 [2024-07-15 07:22:19.504694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:34856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.672 [2024-07-15 07:22:19.504709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:58.672 [2024-07-15 07:22:19.504736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:34864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.672 [2024-07-15 07:22:19.504752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:58.672 [2024-07-15 07:22:19.504774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:34872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.672 [2024-07-15 07:22:19.504789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:58.672 [2024-07-15 07:22:19.504811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:34880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.672 [2024-07-15 07:22:19.504851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:58.672 [2024-07-15 07:22:19.504875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.672 [2024-07-15 07:22:19.504892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:58.672 [2024-07-15 07:22:19.504914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:34896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.672 [2024-07-15 07:22:19.504930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:58.672 [2024-07-15 07:22:19.504952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:34904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.672 [2024-07-15 07:22:19.504967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:58.672 [2024-07-15 07:22:19.504989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.672 [2024-07-15 07:22:19.505005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:58.672 [2024-07-15 07:22:19.505027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.672 [2024-07-15 07:22:19.505042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:58.672 [2024-07-15 07:22:19.505064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:34352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.672 [2024-07-15 07:22:19.505097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:58.672 [2024-07-15 07:22:19.505122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:34360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.672 [2024-07-15 07:22:19.505139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:58.672 [2024-07-15 07:22:19.505162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:34368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.672 [2024-07-15 07:22:19.505178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:58.672 [2024-07-15 07:22:19.505200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:34376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.672 [2024-07-15 07:22:19.505215] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.505237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.505253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.505275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.505290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.505312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:34400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.505337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.505361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:34408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.505377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.505400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:34416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.505415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.505437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:34424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.505453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.505476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:34432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.505491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.505524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.505539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.505561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.505591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.505615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:34456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:58.673 [2024-07-15 07:22:19.505631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.505653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.505669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.505691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.505707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.505729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:34480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.505744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.505766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:34488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.505784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.505808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.505832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.505856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.505872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.505895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.505910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.505932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:34520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.505948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.505970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.505985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.506008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 
nsid:1 lba:34536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.506024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.506050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.673 [2024-07-15 07:22:19.506067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.506113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:34936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.673 [2024-07-15 07:22:19.506130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.506153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.673 [2024-07-15 07:22:19.506169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.506191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:34952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.673 [2024-07-15 07:22:19.506207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.506230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:34960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.673 [2024-07-15 07:22:19.506246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.506268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:34968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.673 [2024-07-15 07:22:19.506283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.506305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:34976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.673 [2024-07-15 07:22:19.506321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.506352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:34984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.673 [2024-07-15 07:22:19.506368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.506391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.506406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.506429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:34552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.506445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.506468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:34560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.506483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.506506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:34568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.506522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.506544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:34576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.506559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.506582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:34584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.506597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.506620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:34592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.506635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.506657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.506673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.506695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.506711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.506733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:34616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.506748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.506770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:34624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.506786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:18:58.673 [2024-07-15 07:22:19.506815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:34632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.506832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.506863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.506880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.506904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:34648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.506919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.506942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:34656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.506957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.506979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:34664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.673 [2024-07-15 07:22:19.506994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:58.673 [2024-07-15 07:22:19.507016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:34992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.673 [2024-07-15 07:22:19.507032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.507054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:35000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.507081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.507106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.507122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.507145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.507161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.507183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:35024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.507198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.507220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.507235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.507269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:35040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.507284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.507306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.507329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.507353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.507368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.507391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.507406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.507428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:35072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.507443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.507465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.507481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.507507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.507523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.507545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.507560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.507583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.507599] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.507621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.507636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.507658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.674 [2024-07-15 07:22:19.507673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.507696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.674 [2024-07-15 07:22:19.507712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.507734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:34688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.674 [2024-07-15 07:22:19.507750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.507772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:34696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.674 [2024-07-15 07:22:19.507793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.507817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.674 [2024-07-15 07:22:19.507833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.507855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.674 [2024-07-15 07:22:19.507871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.507893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.674 [2024-07-15 07:22:19.507908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.507930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:34728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.674 [2024-07-15 07:22:19.507946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.507973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:58.674 [2024-07-15 07:22:19.507990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.508012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.508028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.508050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.508065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.508103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.508120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.508146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.508162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.508184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.508200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.508222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.508237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.508259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.508275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.508306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.508322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.508345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.508361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.508383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 
lba:35200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.508399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.508421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.508436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.508458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.508473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.508495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.508521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.508543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.508558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.508580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.508596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.508618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.508633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.508655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.508671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.508693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.508708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.508730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.674 [2024-07-15 07:22:19.508745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.508776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.674 [2024-07-15 07:22:19.508793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.508815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:34744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.674 [2024-07-15 07:22:19.508831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:58.674 [2024-07-15 07:22:19.508853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.675 [2024-07-15 07:22:19.508868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:19.508890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.675 [2024-07-15 07:22:19.508906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:19.508928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:34768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.675 [2024-07-15 07:22:19.508943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:19.508965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:34776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.675 [2024-07-15 07:22:19.508985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:19.509008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:34784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.675 [2024-07-15 07:22:19.509024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:19.510503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:34792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.675 [2024-07-15 07:22:19.510535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:19.510565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:19.510584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:19.510607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:19.510623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007c p:0 m:0 dnr:0 
00:18:58.675 [2024-07-15 07:22:19.510645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:19.510661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:19.510683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:19.510699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:19.510721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:19.510767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:19.510793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:19.510810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:19.510833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:19.510848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:19.510887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:19.510907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:19.510936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:19.510953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:19.510975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:19.510990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:19.511012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:19.511028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:19.511050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:19.511066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.044980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:26.045051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.045124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:26.045148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.045173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:26.045190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.045211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:26.045227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.045249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.675 [2024-07-15 07:22:26.045291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.045323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:84928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.675 [2024-07-15 07:22:26.045339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.045361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:84936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.675 [2024-07-15 07:22:26.045376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.045398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.675 [2024-07-15 07:22:26.045414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.045435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:84952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.675 [2024-07-15 07:22:26.045451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.045473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:84960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.675 [2024-07-15 07:22:26.045488] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.045510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.675 [2024-07-15 07:22:26.045525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.045547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:84976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.675 [2024-07-15 07:22:26.045562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.045597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:85464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:26.045614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.045636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:26.045651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.045674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:26.045689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.045711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:26.045726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.045753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:26.045769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.045804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:26.045822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.045852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:85512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:26.045867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.045890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:58.675 [2024-07-15 07:22:26.045905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.045928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:26.045943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.045965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:26.045980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.046003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:85544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:26.046018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.046040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.675 [2024-07-15 07:22:26.046055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.046090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.675 [2024-07-15 07:22:26.046109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.046133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:84992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.675 [2024-07-15 07:22:26.046148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.046171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:85000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.675 [2024-07-15 07:22:26.046187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:58.675 [2024-07-15 07:22:26.046209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:85008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.676 [2024-07-15 07:22:26.046224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.046246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.676 [2024-07-15 07:22:26.046262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.046293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 
nsid:1 lba:85024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.676 [2024-07-15 07:22:26.046309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.046332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:85032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.676 [2024-07-15 07:22:26.046348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.046370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.676 [2024-07-15 07:22:26.046386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.046408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.676 [2024-07-15 07:22:26.046424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.046446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:85056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.676 [2024-07-15 07:22:26.046461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.046483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:85064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.676 [2024-07-15 07:22:26.046499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.046521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:85072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.676 [2024-07-15 07:22:26.046536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.046558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.676 [2024-07-15 07:22:26.046574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.046596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.676 [2024-07-15 07:22:26.046611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.046633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:85096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.676 [2024-07-15 07:22:26.046649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.046671] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.676 [2024-07-15 07:22:26.046687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.046845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.676 [2024-07-15 07:22:26.046871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.046902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.676 [2024-07-15 07:22:26.046930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.046958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:85576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.676 [2024-07-15 07:22:26.046974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.047000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.676 [2024-07-15 07:22:26.047015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.047041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.676 [2024-07-15 07:22:26.047057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.047096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.676 [2024-07-15 07:22:26.047115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.047140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.676 [2024-07-15 07:22:26.047156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.047182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.676 [2024-07-15 07:22:26.047197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.047222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.676 [2024-07-15 07:22:26.047238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 
00:18:58.676 [2024-07-15 07:22:26.047264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.676 [2024-07-15 07:22:26.047280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.047304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.676 [2024-07-15 07:22:26.047320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.047345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:85648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.676 [2024-07-15 07:22:26.047360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.047386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.676 [2024-07-15 07:22:26.047401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.047426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.676 [2024-07-15 07:22:26.047450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.047476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.676 [2024-07-15 07:22:26.047493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.047518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.676 [2024-07-15 07:22:26.047533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.047558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.676 [2024-07-15 07:22:26.047573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.047598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:85696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.676 [2024-07-15 07:22:26.047614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.047640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.676 [2024-07-15 07:22:26.047656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.047681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:85712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.676 [2024-07-15 07:22:26.047697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.047722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.676 [2024-07-15 07:22:26.047737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.047762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.676 [2024-07-15 07:22:26.047777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.047802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.676 [2024-07-15 07:22:26.047817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:58.676 [2024-07-15 07:22:26.047843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.677 [2024-07-15 07:22:26.047858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.047883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:85112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.677 [2024-07-15 07:22:26.047899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.047925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.677 [2024-07-15 07:22:26.047941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.048044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:85128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.677 [2024-07-15 07:22:26.048062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.048104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.677 [2024-07-15 07:22:26.048122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.048147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.677 [2024-07-15 07:22:26.048163] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.048188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:85152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.677 [2024-07-15 07:22:26.048203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.048229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:85160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.677 [2024-07-15 07:22:26.048244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.048270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:85168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.677 [2024-07-15 07:22:26.048285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.048330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.677 [2024-07-15 07:22:26.048351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.048378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:85760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.677 [2024-07-15 07:22:26.048394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.048420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:85768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.677 [2024-07-15 07:22:26.048436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.048461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.677 [2024-07-15 07:22:26.048477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.048502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.677 [2024-07-15 07:22:26.048517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.048542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:85792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.677 [2024-07-15 07:22:26.048558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.048593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:85800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:58.677 [2024-07-15 07:22:26.048609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.048634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.677 [2024-07-15 07:22:26.048650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.048675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.677 [2024-07-15 07:22:26.048691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.048717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:85824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.677 [2024-07-15 07:22:26.048733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.048757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.677 [2024-07-15 07:22:26.048773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.048798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.677 [2024-07-15 07:22:26.048814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.048839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.677 [2024-07-15 07:22:26.048854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.048879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.677 [2024-07-15 07:22:26.048895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.048920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:85864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.677 [2024-07-15 07:22:26.048936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.048960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:85872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.677 [2024-07-15 07:22:26.048976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.049001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 
lba:85176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.677 [2024-07-15 07:22:26.049017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.049046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:85184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.677 [2024-07-15 07:22:26.049062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.049104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.677 [2024-07-15 07:22:26.049130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.049158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:85200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.677 [2024-07-15 07:22:26.049175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.049200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.677 [2024-07-15 07:22:26.049216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.049241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:85216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.677 [2024-07-15 07:22:26.049257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.049282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:85224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.677 [2024-07-15 07:22:26.049297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.049322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:85232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.677 [2024-07-15 07:22:26.049338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.049363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:85240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.677 [2024-07-15 07:22:26.049379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.049405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.677 [2024-07-15 07:22:26.049420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.049445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:85256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.677 [2024-07-15 07:22:26.049461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.049486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:85264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.677 [2024-07-15 07:22:26.049502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.049527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.677 [2024-07-15 07:22:26.049542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.049567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:85280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.677 [2024-07-15 07:22:26.049603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.049629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:85288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.677 [2024-07-15 07:22:26.049653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.049680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.677 [2024-07-15 07:22:26.049696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.049726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.677 [2024-07-15 07:22:26.049743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.049769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.677 [2024-07-15 07:22:26.049784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.049809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.677 [2024-07-15 07:22:26.049825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:58.677 [2024-07-15 07:22:26.049850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:85904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.677 [2024-07-15 07:22:26.049866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
00:18:58.677 [2024-07-15 07:22:26.049891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.677 [2024-07-15 07:22:26.049907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:26.049931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.678 [2024-07-15 07:22:26.049947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:26.049972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.678 [2024-07-15 07:22:26.049988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:26.050013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.678 [2024-07-15 07:22:26.050028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:26.050054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:85304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.678 [2024-07-15 07:22:26.050069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:26.050109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:85312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.678 [2024-07-15 07:22:26.050126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:26.050152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.678 [2024-07-15 07:22:26.050168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:26.050202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.678 [2024-07-15 07:22:26.050219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:26.050244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:85336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.678 [2024-07-15 07:22:26.050260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:26.050285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:85344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.678 [2024-07-15 07:22:26.050301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:75 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:26.050326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.678 [2024-07-15 07:22:26.050342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:26.050367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:85360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.678 [2024-07-15 07:22:26.050383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:26.050408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:85368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.678 [2024-07-15 07:22:26.050424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:26.050449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:85376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.678 [2024-07-15 07:22:26.050465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:26.050490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.678 [2024-07-15 07:22:26.050506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:26.050531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:85392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.678 [2024-07-15 07:22:26.050547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:26.050572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:85400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.678 [2024-07-15 07:22:26.050587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:26.050612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:85408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.678 [2024-07-15 07:22:26.050628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:26.050653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:85416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.678 [2024-07-15 07:22:26.050668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:26.050701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:85424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.678 [2024-07-15 07:22:26.050717] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.157693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:107560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.678 [2024-07-15 07:22:33.157786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.157852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:107568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.678 [2024-07-15 07:22:33.157875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.157900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:107576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.678 [2024-07-15 07:22:33.157917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.157939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.678 [2024-07-15 07:22:33.157955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.157977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:107592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.678 [2024-07-15 07:22:33.157993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.158015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:107600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.678 [2024-07-15 07:22:33.158030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.158051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.678 [2024-07-15 07:22:33.158067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.158106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:107616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.678 [2024-07-15 07:22:33.158123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.158145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:107624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.678 [2024-07-15 07:22:33.158160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.158182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:107632 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:58.678 [2024-07-15 07:22:33.158197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.158220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.678 [2024-07-15 07:22:33.158235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.158288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:107648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.678 [2024-07-15 07:22:33.158306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.158328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.678 [2024-07-15 07:22:33.158343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.158365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:107112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.678 [2024-07-15 07:22:33.158380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.158402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:107120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.678 [2024-07-15 07:22:33.158417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.158439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:107128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.678 [2024-07-15 07:22:33.158454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.158476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:107136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.678 [2024-07-15 07:22:33.158491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.158515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:107144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.678 [2024-07-15 07:22:33.158531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.158554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:107152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.678 [2024-07-15 07:22:33.158568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.158590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:122 nsid:1 lba:107160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.678 [2024-07-15 07:22:33.158606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.158628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.678 [2024-07-15 07:22:33.158643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.158665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:107664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.678 [2024-07-15 07:22:33.158680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.158703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:107672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.678 [2024-07-15 07:22:33.158718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.158741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.678 [2024-07-15 07:22:33.158765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.158794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:107688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.678 [2024-07-15 07:22:33.158811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:58.678 [2024-07-15 07:22:33.158834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:107696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.678 [2024-07-15 07:22:33.158849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.158871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.158887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.158908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:107712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.158923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.158946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:107720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.158962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 
07:22:33.158985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:107728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.158999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.159021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:107736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.159037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.159059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:107744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.159088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.159122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:107752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.159138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.159162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.159177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.159200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:107768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.159215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.159237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:107776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.159262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.159285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.159301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.159324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:107792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.159339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.159361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:107800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.159375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.159398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.159413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.159435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.679 [2024-07-15 07:22:33.159450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.159473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.679 [2024-07-15 07:22:33.159488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.159510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:107192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.679 [2024-07-15 07:22:33.159526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.159548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:107200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.679 [2024-07-15 07:22:33.159563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.159586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.679 [2024-07-15 07:22:33.159602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.159624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:107216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.679 [2024-07-15 07:22:33.159640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.159663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:107224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.679 [2024-07-15 07:22:33.159679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.159702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:107232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.679 [2024-07-15 07:22:33.159718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.160177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.160206] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.160240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.160258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.160285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.160301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.160327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.160343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.160369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.160385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.160411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.160427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.160453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.160468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.160495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.160511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.160537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:107880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.160553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.160579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.160595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.160621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:107896 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:58.679 [2024-07-15 07:22:33.160637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.160663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.160679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.160720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:107912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.160737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.160763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.160779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.160805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.160821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.160847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.679 [2024-07-15 07:22:33.160863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.160889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.679 [2024-07-15 07:22:33.160904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.160932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.679 [2024-07-15 07:22:33.160948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.160974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.679 [2024-07-15 07:22:33.160990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.161016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:107264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.679 [2024-07-15 07:22:33.161032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:58.679 [2024-07-15 07:22:33.161059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:107272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.679 [2024-07-15 07:22:33.161090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.161120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:107280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.161137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.161164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:107288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.161180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.161206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:107296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.161222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.161257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.161274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.161301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.161316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.161343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:107320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.161359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.161386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.161401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.161428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.161443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.161470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:107344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.161486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 
07:22:33.161512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:107352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.161528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.161554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.161569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.161611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:107368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.161630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.161657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:107376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.161673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.161699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:107384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.161715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.161741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:107392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.161757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.161783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.161807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.161834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:107408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.161851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.161886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:107416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.161901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.161928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:107424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.161943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:39 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.161974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.680 [2024-07-15 07:22:33.161991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.162017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.680 [2024-07-15 07:22:33.162033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.162061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.680 [2024-07-15 07:22:33.162089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.162118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:107968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.680 [2024-07-15 07:22:33.162134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.162161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.680 [2024-07-15 07:22:33.162176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.162202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:107984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.680 [2024-07-15 07:22:33.162218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.162243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:107992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.680 [2024-07-15 07:22:33.162259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.162285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:108000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.680 [2024-07-15 07:22:33.162301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.162327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.162352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.162380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.162397] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.162423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:107448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.162439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.162465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:107456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.162481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.162507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:107464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.162523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.162549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:107472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.162565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.162591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:107480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.162607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.162633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:107488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.162649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.162675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.162691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.162717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.680 [2024-07-15 07:22:33.162732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:58.680 [2024-07-15 07:22:33.162759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:107512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.681 [2024-07-15 07:22:33.162775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:33.162802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107520 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:58.681 [2024-07-15 07:22:33.162825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:33.162852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.681 [2024-07-15 07:22:33.162868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:33.162902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:107536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.681 [2024-07-15 07:22:33.162919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:33.162945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:107544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.681 [2024-07-15 07:22:33.162961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:33.162988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:107552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.681 [2024-07-15 07:22:33.163004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:33.163033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:33.163050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:33.163089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:108016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:33.163108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:33.163135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:33.163151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:33.163178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:108032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:33.163194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:33.163220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:33.163235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:33.163261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:126 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:33.163277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:33.163303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:108056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:33.163318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:33.163344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:33.163361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:33.163386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:33.163402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:33.163441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:33.163457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:33.163488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:33.163504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:33.163530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:33.163548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:33.163574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:33.163591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:33.163617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:33.163633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:33.163659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:33.163674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:33.163700] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:33.163716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:46.517486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:46.517548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:46.517619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:46.517644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:46.517669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:46.517685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:46.517706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:46.517721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:46.517743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:46.517758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:46.517810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:46.517839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:46.517864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:46.517880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:46.517902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:46.517917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:46.517938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:46.517953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 
dnr:0 00:18:58.681 [2024-07-15 07:22:46.517974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:46.517990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:46.518011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:46.518026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:46.518047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:46.518062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:46.518103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:46.518130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:46.518152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:46.518167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:46.518189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:46.518204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:46.518226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.681 [2024-07-15 07:22:46.518242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:46.518263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.681 [2024-07-15 07:22:46.518278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:46.518303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.681 [2024-07-15 07:22:46.518330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:46.518355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.681 [2024-07-15 07:22:46.518371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:46.518393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.681 [2024-07-15 07:22:46.518409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:46.518431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.681 [2024-07-15 07:22:46.518447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:46.518469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.681 [2024-07-15 07:22:46.518485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:58.681 [2024-07-15 07:22:46.518507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.682 [2024-07-15 07:22:46.518522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.518544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.682 [2024-07-15 07:22:46.518560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.518581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.682 [2024-07-15 07:22:46.518597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.518619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.682 [2024-07-15 07:22:46.518634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.518656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.682 [2024-07-15 07:22:46.518671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.518693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.682 [2024-07-15 07:22:46.518708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.518731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.682 [2024-07-15 07:22:46.518746] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.518768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.682 [2024-07-15 07:22:46.518790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.518814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.682 [2024-07-15 07:22:46.518830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.518852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.682 [2024-07-15 07:22:46.518868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.518921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.682 [2024-07-15 07:22:46.518943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.518961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.682 [2024-07-15 07:22:46.518976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.518992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.682 [2024-07-15 07:22:46.519006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.682 [2024-07-15 07:22:46.519036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.682 [2024-07-15 07:22:46.519066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.682 [2024-07-15 07:22:46.519114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.682 [2024-07-15 07:22:46.519144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.682 [2024-07-15 07:22:46.519174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.682 [2024-07-15 07:22:46.519203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.682 [2024-07-15 07:22:46.519234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.682 [2024-07-15 07:22:46.519274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.682 [2024-07-15 07:22:46.519305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.682 [2024-07-15 07:22:46.519334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.682 [2024-07-15 07:22:46.519364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.682 [2024-07-15 07:22:46.519394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.682 [2024-07-15 07:22:46.519423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.682 [2024-07-15 07:22:46.519454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.682 [2024-07-15 07:22:46.519485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.682 [2024-07-15 07:22:46.519516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.682 [2024-07-15 07:22:46.519547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.682 [2024-07-15 07:22:46.519576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.682 [2024-07-15 07:22:46.519606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.682 [2024-07-15 07:22:46.519636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.682 [2024-07-15 07:22:46.519673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.682 [2024-07-15 07:22:46.519703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.682 [2024-07-15 07:22:46.519733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.682 [2024-07-15 07:22:46.519762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.682 [2024-07-15 07:22:46.519792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.682 [2024-07-15 07:22:46.519822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.682 [2024-07-15 07:22:46.519852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.682 [2024-07-15 07:22:46.519881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.682 [2024-07-15 07:22:46.519911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.682 [2024-07-15 07:22:46.519940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.682 [2024-07-15 07:22:46.519976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.519991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.682 [2024-07-15 07:22:46.520006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.520022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.682 [2024-07-15 07:22:46.520042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.682 [2024-07-15 07:22:46.520058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.682 [2024-07-15 07:22:46.520085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 
07:22:46.520104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.683 [2024-07-15 07:22:46.520119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.683 [2024-07-15 07:22:46.520149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.683 [2024-07-15 07:22:46.520178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.683 [2024-07-15 07:22:46.520208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.683 [2024-07-15 07:22:46.520238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.683 [2024-07-15 07:22:46.520268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.683 [2024-07-15 07:22:46.520298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.683 [2024-07-15 07:22:46.520328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.683 [2024-07-15 07:22:46.520358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.683 [2024-07-15 07:22:46.520388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.683 [2024-07-15 07:22:46.520418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.683 [2024-07-15 07:22:46.520456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.683 [2024-07-15 07:22:46.520489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.683 [2024-07-15 07:22:46.520519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.683 [2024-07-15 07:22:46.520549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.683 [2024-07-15 07:22:46.520579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.683 [2024-07-15 07:22:46.520608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.683 [2024-07-15 07:22:46.520638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.683 [2024-07-15 07:22:46.520668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.683 [2024-07-15 07:22:46.520697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:17 nsid:1 lba:13992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.683 [2024-07-15 07:22:46.520726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.683 [2024-07-15 07:22:46.520756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.683 [2024-07-15 07:22:46.520785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.683 [2024-07-15 07:22:46.520821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.683 [2024-07-15 07:22:46.520851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.683 [2024-07-15 07:22:46.520880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.683 [2024-07-15 07:22:46.520910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.683 [2024-07-15 07:22:46.520940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.683 [2024-07-15 07:22:46.520973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.520988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.683 [2024-07-15 07:22:46.521003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.521018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13496 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:58.683 [2024-07-15 07:22:46.521032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.521048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.683 [2024-07-15 07:22:46.521062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.521093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.683 [2024-07-15 07:22:46.521109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.521124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.683 [2024-07-15 07:22:46.521138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.521154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.683 [2024-07-15 07:22:46.521169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.521185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.683 [2024-07-15 07:22:46.521199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.521214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.683 [2024-07-15 07:22:46.521236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.683 [2024-07-15 07:22:46.521257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.683 [2024-07-15 07:22:46.521273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.684 [2024-07-15 07:22:46.521288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.684 [2024-07-15 07:22:46.521303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.684 [2024-07-15 07:22:46.521318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.684 [2024-07-15 07:22:46.521332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.684 [2024-07-15 07:22:46.521348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.684 
[2024-07-15 07:22:46.521363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.684 [2024-07-15 07:22:46.521378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.684 [2024-07-15 07:22:46.521392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.684 [2024-07-15 07:22:46.521407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d506d0 is same with the state(5) to be set 00:18:58.684 [2024-07-15 07:22:46.521425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:58.684 [2024-07-15 07:22:46.521436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:58.684 [2024-07-15 07:22:46.521447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13592 len:8 PRP1 0x0 PRP2 0x0 00:18:58.684 [2024-07-15 07:22:46.521463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.684 [2024-07-15 07:22:46.521478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:58.684 [2024-07-15 07:22:46.521488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:58.684 [2024-07-15 07:22:46.521499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:8 PRP1 0x0 PRP2 0x0 00:18:58.684 [2024-07-15 07:22:46.521513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.684 [2024-07-15 07:22:46.521527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:58.684 [2024-07-15 07:22:46.521537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:58.684 [2024-07-15 07:22:46.521548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14056 len:8 PRP1 0x0 PRP2 0x0 00:18:58.684 [2024-07-15 07:22:46.521561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.684 [2024-07-15 07:22:46.521575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:58.684 [2024-07-15 07:22:46.521585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:58.684 [2024-07-15 07:22:46.521595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14064 len:8 PRP1 0x0 PRP2 0x0 00:18:58.684 [2024-07-15 07:22:46.521620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.684 [2024-07-15 07:22:46.521642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:58.684 [2024-07-15 07:22:46.521653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:58.684 [2024-07-15 07:22:46.521664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14072 len:8 PRP1 0x0 PRP2 0x0 00:18:58.684 [2024-07-15 07:22:46.521677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.684 [2024-07-15 07:22:46.521691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:58.684 [2024-07-15 07:22:46.521702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:58.684 [2024-07-15 07:22:46.521712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:8 PRP1 0x0 PRP2 0x0 00:18:58.684 [2024-07-15 07:22:46.521726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.684 [2024-07-15 07:22:46.521740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:58.684 [2024-07-15 07:22:46.521750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:58.684 [2024-07-15 07:22:46.521760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14088 len:8 PRP1 0x0 PRP2 0x0 00:18:58.684 [2024-07-15 07:22:46.521774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.684 [2024-07-15 07:22:46.521787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:58.684 [2024-07-15 07:22:46.521797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:58.684 [2024-07-15 07:22:46.521808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14096 len:8 PRP1 0x0 PRP2 0x0 00:18:58.684 [2024-07-15 07:22:46.521821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.684 [2024-07-15 07:22:46.521835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:58.684 [2024-07-15 07:22:46.521845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:58.684 [2024-07-15 07:22:46.521856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14104 len:8 PRP1 0x0 PRP2 0x0 00:18:58.684 [2024-07-15 07:22:46.521872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.684 [2024-07-15 07:22:46.521887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:58.684 [2024-07-15 07:22:46.521897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:58.684 [2024-07-15 07:22:46.521908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:8 PRP1 0x0 PRP2 0x0 00:18:58.684 [2024-07-15 07:22:46.521921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.684 [2024-07-15 07:22:46.521935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:58.684 [2024-07-15 07:22:46.521944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:58.684 [2024-07-15 07:22:46.521955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14120 len:8 PRP1 0x0 PRP2 0x0 00:18:58.684 [2024-07-15 07:22:46.521969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.684 
[2024-07-15 07:22:46.521983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:58.684 [2024-07-15 07:22:46.521992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:58.684 [2024-07-15 07:22:46.522003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14128 len:8 PRP1 0x0 PRP2 0x0 00:18:58.684 [2024-07-15 07:22:46.522022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.684 [2024-07-15 07:22:46.522037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:58.684 [2024-07-15 07:22:46.522047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:58.684 [2024-07-15 07:22:46.522058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14136 len:8 PRP1 0x0 PRP2 0x0 00:18:58.684 [2024-07-15 07:22:46.522088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.684 [2024-07-15 07:22:46.522106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:58.684 [2024-07-15 07:22:46.522117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:58.684 [2024-07-15 07:22:46.522128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:8 PRP1 0x0 PRP2 0x0 00:18:58.684 [2024-07-15 07:22:46.522141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.684 [2024-07-15 07:22:46.522155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:58.684 [2024-07-15 07:22:46.522165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:58.684 [2024-07-15 07:22:46.522176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14152 len:8 PRP1 0x0 PRP2 0x0 00:18:58.684 [2024-07-15 07:22:46.522189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.684 [2024-07-15 07:22:46.522203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:58.684 [2024-07-15 07:22:46.522213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:58.684 [2024-07-15 07:22:46.522224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14160 len:8 PRP1 0x0 PRP2 0x0 00:18:58.684 [2024-07-15 07:22:46.522237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.684 [2024-07-15 07:22:46.522251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:58.684 [2024-07-15 07:22:46.522261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:58.685 [2024-07-15 07:22:46.522272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14168 len:8 PRP1 0x0 PRP2 0x0 00:18:58.685 [2024-07-15 07:22:46.522290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.685 [2024-07-15 07:22:46.522337] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d506d0 was disconnected and freed. reset controller. 00:18:58.685 [2024-07-15 07:22:46.522471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:58.685 [2024-07-15 07:22:46.522498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.685 [2024-07-15 07:22:46.522515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:58.685 [2024-07-15 07:22:46.522529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.685 [2024-07-15 07:22:46.522543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:58.685 [2024-07-15 07:22:46.522557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.685 [2024-07-15 07:22:46.522571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:58.685 [2024-07-15 07:22:46.522596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.685 [2024-07-15 07:22:46.522622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.685 [2024-07-15 07:22:46.522637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.685 [2024-07-15 07:22:46.522657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(5) to be set 00:18:58.685 [2024-07-15 07:22:46.523777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:58.685 [2024-07-15 07:22:46.523817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cca100 (9): Bad file descriptor 00:18:58.685 [2024-07-15 07:22:46.524247] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:58.685 [2024-07-15 07:22:46.524282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cca100 with addr=10.0.0.2, port=4421 00:18:58.685 [2024-07-15 07:22:46.524300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(5) to be set 00:18:58.685 [2024-07-15 07:22:46.524367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cca100 (9): Bad file descriptor 00:18:58.685 [2024-07-15 07:22:46.524404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:58.685 [2024-07-15 07:22:46.524420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:58.685 [2024-07-15 07:22:46.524435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
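The burst of ABORTED - SQ DELETION completions above, followed by the qpair being freed and the controller reset reconnecting to 10.0.0.2 port 4421, is the multipath failover path being exercised: the active listener disappears, every queued I/O on that qpair is aborted, and bdev_nvme retries the alternate path (failing with errno 111 until it is up, then succeeding). The exact trigger is not shown in this excerpt; the following is only a hedged sketch of the kind of listener toggle that produces this pattern, reusing the RPC script, subsystem NQN, and port numbers that appear elsewhere in this log, not the verbatim test script:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1
    # Drop the path the host is currently using; in-flight I/O on that qpair
    # is completed with "ABORTED - SQ DELETION" and bdev_nvme starts a reset.
    $rpc nvmf_subsystem_remove_listener $subsys -t tcp -a 10.0.0.2 -s 4420
    # Offer the alternate path; the reconnect attempts to port 4421 seen above
    # keep failing with errno 111 until this listener is actually listening.
    $rpc nvmf_subsystem_add_listener $subsys -t tcp -a 10.0.0.2 -s 4421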
00:18:58.685 [2024-07-15 07:22:46.524467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:58.685 [2024-07-15 07:22:46.524485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:58.685 [2024-07-15 07:22:56.587730] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:58.685 Received shutdown signal, test time was about 55.683802 seconds 00:18:58.685 00:18:58.685 Latency(us) 00:18:58.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.685 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:58.685 Verification LBA range: start 0x0 length 0x4000 00:18:58.685 Nvme0n1 : 55.68 7178.03 28.04 0.00 0.00 17799.22 1333.06 7046430.72 00:18:58.685 =================================================================================================================== 00:18:58.685 Total : 7178.03 28.04 0.00 0.00 17799.22 1333.06 7046430.72 00:18:58.685 07:23:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:58.685 rmmod nvme_tcp 00:18:58.685 rmmod nvme_fabrics 00:18:58.685 rmmod nvme_keyring 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 80777 ']' 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 80777 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 80777 ']' 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 80777 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80777 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:58.685 killing process with pid 80777 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 
-- # echo 'killing process with pid 80777' 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 80777 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 80777 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:58.685 00:18:58.685 real 1m0.943s 00:18:58.685 user 2m49.221s 00:18:58.685 sys 0m18.812s 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:58.685 07:23:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:58.685 ************************************ 00:18:58.685 END TEST nvmf_host_multipath 00:18:58.685 ************************************ 00:18:58.974 07:23:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:58.974 07:23:07 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:58.974 07:23:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:58.974 07:23:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:58.974 07:23:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:58.974 ************************************ 00:18:58.974 START TEST nvmf_timeout 00:18:58.974 ************************************ 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:58.974 * Looking for test storage... 
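As a sanity check, the bdevperf summary above is internally consistent: at a 4096-byte I/O size, 7178.03 IOPS is 7178.03 x 4096 / 2^20, which is about 28.04 MiB/s, matching the reported column. After the summary, the multipath test tears itself down before nvmf_timeout begins. Condensed from the trace above (same RPC script, module names, and target pid 80777 as in this run; a sketch, not the full autotest_common teardown), the cleanup amounts to:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Remove the subsystem the host was connected to.
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # Unload the kernel initiator modules pulled in for the test.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Stop the nvmf target process started for this test run.
    kill 80777 && wait 80777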
00:18:58.974 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.974 
07:23:07 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.974 07:23:07 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:58.974 Cannot find device "nvmf_tgt_br" 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:58.974 Cannot find device "nvmf_tgt_br2" 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:58.974 Cannot find device "nvmf_tgt_br" 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:58.974 Cannot find device "nvmf_tgt_br2" 00:18:58.974 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:18:58.975 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:58.975 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:58.975 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:58.975 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:58.975 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:18:58.975 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:58.975 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:58.975 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:18:58.975 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:58.975 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:58.975 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:58.975 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:58.975 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:59.234 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:59.234 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:59.234 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:59.234 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:59.234 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:59.234 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:59.234 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:59.234 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:59.234 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:59.234 07:23:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:59.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:59.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:18:59.234 00:18:59.234 --- 10.0.0.2 ping statistics --- 00:18:59.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.234 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:59.234 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:59.234 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:18:59.234 00:18:59.234 --- 10.0.0.3 ping statistics --- 00:18:59.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.234 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:59.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:59.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:18:59.234 00:18:59.234 --- 10.0.0.1 ping statistics --- 00:18:59.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.234 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=81936 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 81936 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 81936 ']' 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:59.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:59.234 07:23:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:59.492 [2024-07-15 07:23:08.197421] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
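The nvmf_timeout test reuses the virtual topology that nvmftestinit builds for every NET_TYPE=virt run: a network namespace holding the target-side veth ends, a bridge joining them to the initiator side, the 10.0.0.1/10.0.0.2/10.0.0.3 addresses verified by the pings above, and finally the target launched inside the namespace. Condensed from the commands traced above (interface, namespace, and binary paths are the ones this log uses; the second target interface is omitted here for brevity), the setup is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Bridge the two sides together and let NVMe/TCP traffic in.
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> target reachability check
    # The target then runs inside the namespace on cores 0-1.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3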
00:18:59.492 [2024-07-15 07:23:08.197518] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.492 [2024-07-15 07:23:08.337840] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:59.493 [2024-07-15 07:23:08.406661] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.493 [2024-07-15 07:23:08.406721] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:59.493 [2024-07-15 07:23:08.406736] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:59.493 [2024-07-15 07:23:08.406746] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:59.493 [2024-07-15 07:23:08.406754] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:59.493 [2024-07-15 07:23:08.406855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.493 [2024-07-15 07:23:08.406871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.493 [2024-07-15 07:23:08.439691] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:00.429 07:23:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:00.429 07:23:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:00.429 07:23:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:00.429 07:23:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:00.429 07:23:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:00.429 07:23:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.429 07:23:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:00.429 07:23:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:00.687 [2024-07-15 07:23:09.508403] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.687 07:23:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:00.946 Malloc0 00:19:00.946 07:23:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:01.205 07:23:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:01.463 07:23:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.763 [2024-07-15 07:23:10.589032] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.763 07:23:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81991 00:19:01.763 07:23:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:01.763 07:23:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81991 /var/tmp/bdevperf.sock 00:19:01.763 07:23:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 81991 ']' 00:19:01.763 07:23:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.763 07:23:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:01.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:01.763 07:23:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:01.763 07:23:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:01.763 07:23:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:01.763 [2024-07-15 07:23:10.656232] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:19:01.763 [2024-07-15 07:23:10.656320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81991 ] 00:19:02.022 [2024-07-15 07:23:10.790528] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.022 [2024-07-15 07:23:10.876284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.022 [2024-07-15 07:23:10.911534] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:02.955 07:23:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:02.955 07:23:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:02.955 07:23:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:03.521 07:23:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:03.779 NVMe0n1 00:19:03.779 07:23:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82018 00:19:03.779 07:23:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:03.779 07:23:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:03.779 Running I/O for 10 seconds... 
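At this point everything the timeout test needs is in place: a 64 MiB malloc-backed namespace exported over TCP, and a bdevperf host attached with a short controller-loss timeout so that path loss is noticed within seconds. Pulling the RPCs out of the trace above into one place (same script paths, NQN, addresses, and sockets as in this log; a condensed sketch, not the timeout.sh source), the configuration is:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1

    # Target side (default RPC socket of the nvmf_tgt running in the namespace).
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem $subsys -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns $subsys Malloc0
    $rpc nvmf_subsystem_add_listener $subsys -t tcp -a 10.0.0.2 -s 4420

    # Host side: bdevperf on its own RPC socket, verify workload, QD 128, 4 KiB I/O.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $subsys \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests

With --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, the host retries a lost controller roughly every two seconds and gives up after about five, which is exactly the behavior the listener removal in the next block is designed to provoke.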
00:19:04.711 07:23:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:04.970 [2024-07-15 07:23:13.796978] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141be50 is same with [2024-07-15 07:23:13.797027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.970 [2024-07-15 07:23:13.797086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.970 [2024-07-15 07:23:13.797108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.970 [2024-07-15 07:23:13.797119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.970 [2024-07-15 07:23:13.797129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.970 [2024-07-15 07:23:13.797138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.970 [2024-07-15 07:23:13.797149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.970 [2024-07-15 07:23:13.797158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.970 [2024-07-15 07:23:13.797168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8d40 is same with the state(5) to be set 00:19:04.970 the state(5) to be set 00:19:04.970 [2024-07-15 07:23:13.797438] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141be50 is same with the state(5) to be set 00:19:04.970 [2024-07-15 07:23:13.797454] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141be50 is same with the state(5) to be set 00:19:04.970 [2024-07-15 07:23:13.797463] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141be50 is same with the state(5) to be set 00:19:04.970 [2024-07-15 07:23:13.797472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141be50 is same with the state(5) to be set 00:19:04.970 [2024-07-15 07:23:13.798013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.970 [2024-07-15 07:23:13.798034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.970 [2024-07-15 07:23:13.798054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.970 [2024-07-15 07:23:13.798065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.970 [2024-07-15 07:23:13.798092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.970 [2024-07-15 07:23:13.798105] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.970 [2024-07-15 07:23:13.798119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.970 [2024-07-15 07:23:13.798135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.970 [2024-07-15 07:23:13.798154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.970 [2024-07-15 07:23:13.798163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.970 [2024-07-15 07:23:13.798175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.970 [2024-07-15 07:23:13.798185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.970 [2024-07-15 07:23:13.798197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.970 [2024-07-15 07:23:13.798206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.970 [2024-07-15 07:23:13.798217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.970 [2024-07-15 07:23:13.798226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.970 [2024-07-15 07:23:13.798238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.970 [2024-07-15 07:23:13.798252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.970 [2024-07-15 07:23:13.798269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.970 [2024-07-15 07:23:13.798279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.970 [2024-07-15 07:23:13.798290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.970 [2024-07-15 07:23:13.798300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.970 [2024-07-15 07:23:13.798324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.970 [2024-07-15 07:23:13.798335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.970 [2024-07-15 07:23:13.798346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.970 [2024-07-15 07:23:13.798356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.970 [2024-07-15 07:23:13.798367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.970 [2024-07-15 07:23:13.798377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.970 [2024-07-15 07:23:13.798388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.970 [2024-07-15 07:23:13.798400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.970 [2024-07-15 07:23:13.798411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.970 [2024-07-15 07:23:13.798420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.970 [2024-07-15 07:23:13.798432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.970 [2024-07-15 07:23:13.798442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.970 [2024-07-15 07:23:13.798453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.970 [2024-07-15 07:23:13.798462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.970 [2024-07-15 07:23:13.798473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.970 [2024-07-15 07:23:13.798483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.970 [2024-07-15 07:23:13.798494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.970 [2024-07-15 07:23:13.798503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.970 [2024-07-15 07:23:13.798515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.971 [2024-07-15 07:23:13.798524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.798535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.971 [2024-07-15 07:23:13.798544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.798556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.798565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:04.971 [2024-07-15 07:23:13.798576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.798586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.798598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.798607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.798619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.798628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.798639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.798649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.798660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.798671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.798689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.798699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.798710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.798720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.798732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.798742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.798753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.798763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.798774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.798783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.798795] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.798804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.798815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.798825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.798839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.798854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.798872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.798886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.798901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.798911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.798928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.798940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.798951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.798961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.798972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.798982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.798993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.799002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.799013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.799023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.799034] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.799043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.799054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.799065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.799098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.799116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.799133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.971 [2024-07-15 07:23:13.799146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.799158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.971 [2024-07-15 07:23:13.799168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.799179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.971 [2024-07-15 07:23:13.799188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.799200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.971 [2024-07-15 07:23:13.799209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.799221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.971 [2024-07-15 07:23:13.799230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.799244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.971 [2024-07-15 07:23:13.799259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.799271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.971 [2024-07-15 07:23:13.799281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.799292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77368 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.971 [2024-07-15 07:23:13.799302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.799313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.799322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.799334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.799343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.799355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.799364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.799375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.799385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.799401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.799421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.799438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.799448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.799460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.799469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.799481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.799490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.971 [2024-07-15 07:23:13.799501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.971 [2024-07-15 07:23:13.799511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.799522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:04.972 [2024-07-15 07:23:13.799532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.799543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.799552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.799563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.799572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.799584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.799593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.799604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.799614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.799625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.799634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.799645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.799654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.799666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.972 [2024-07-15 07:23:13.799675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.799686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.972 [2024-07-15 07:23:13.799695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.799706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.972 [2024-07-15 07:23:13.799715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.799727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.972 [2024-07-15 07:23:13.799736] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.799747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.972 [2024-07-15 07:23:13.799759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.799776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.972 [2024-07-15 07:23:13.799786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.799798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.972 [2024-07-15 07:23:13.799808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.799819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.972 [2024-07-15 07:23:13.799828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.799847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.799856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.799867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.799877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.799888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.799898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.799909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.799924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.799938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.799953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.799966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.799976] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.799988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.799997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.800009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.800018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.800030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.800039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.800050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.800060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.800085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.800096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.800108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.800118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.800129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.800138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.800150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.800159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.800171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.800180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.800191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.800200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.800212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.800227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.800239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.800248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.800259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.800269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.800280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.972 [2024-07-15 07:23:13.800289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.800301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.972 [2024-07-15 07:23:13.800311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.800329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.972 [2024-07-15 07:23:13.800343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.800360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.972 [2024-07-15 07:23:13.800378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.800390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.972 [2024-07-15 07:23:13.800405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.800417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.972 [2024-07-15 07:23:13.800426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.800438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.972 [2024-07-15 07:23:13.800453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 
[2024-07-15 07:23:13.800468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.972 [2024-07-15 07:23:13.800478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.972 [2024-07-15 07:23:13.800496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.972 [2024-07-15 07:23:13.800511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.973 [2024-07-15 07:23:13.800525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.973 [2024-07-15 07:23:13.800539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.973 [2024-07-15 07:23:13.800551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.973 [2024-07-15 07:23:13.800561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.973 [2024-07-15 07:23:13.800572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.973 [2024-07-15 07:23:13.800582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.973 [2024-07-15 07:23:13.800593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.973 [2024-07-15 07:23:13.800603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.973 [2024-07-15 07:23:13.800614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.973 [2024-07-15 07:23:13.800625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.973 [2024-07-15 07:23:13.800637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.973 [2024-07-15 07:23:13.800646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.973 [2024-07-15 07:23:13.800658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.973 [2024-07-15 07:23:13.800667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.973 [2024-07-15 07:23:13.800678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14134d0 is same with the state(5) to be set 00:19:04.973 [2024-07-15 07:23:13.800691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.973 [2024-07-15 07:23:13.800699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:19:04.973 [2024-07-15 07:23:13.800708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77560 len:8 PRP1 0x0 PRP2 0x0 00:19:04.973 [2024-07-15 07:23:13.800717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.973 [2024-07-15 07:23:13.800727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.973 [2024-07-15 07:23:13.800734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.973 [2024-07-15 07:23:13.800744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78112 len:8 PRP1 0x0 PRP2 0x0 00:19:04.973 [2024-07-15 07:23:13.800753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.973 [2024-07-15 07:23:13.800763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.973 [2024-07-15 07:23:13.800770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.973 [2024-07-15 07:23:13.800779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78120 len:8 PRP1 0x0 PRP2 0x0 00:19:04.973 [2024-07-15 07:23:13.800789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.973 [2024-07-15 07:23:13.800799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.973 [2024-07-15 07:23:13.800806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.973 [2024-07-15 07:23:13.800814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78128 len:8 PRP1 0x0 PRP2 0x0 00:19:04.973 [2024-07-15 07:23:13.800822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.973 [2024-07-15 07:23:13.800832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.973 [2024-07-15 07:23:13.800839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.973 [2024-07-15 07:23:13.800847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78136 len:8 PRP1 0x0 PRP2 0x0 00:19:04.973 [2024-07-15 07:23:13.800856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.973 [2024-07-15 07:23:13.800866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.973 [2024-07-15 07:23:13.800873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.973 [2024-07-15 07:23:13.800881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78144 len:8 PRP1 0x0 PRP2 0x0 00:19:04.973 [2024-07-15 07:23:13.800890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.973 [2024-07-15 07:23:13.800899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.973 [2024-07-15 07:23:13.800906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.973 [2024-07-15 
07:23:13.800921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78152 len:8 PRP1 0x0 PRP2 0x0 00:19:04.973 [2024-07-15 07:23:13.800934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.973 [2024-07-15 07:23:13.800944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.973 [2024-07-15 07:23:13.800951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.973 [2024-07-15 07:23:13.800959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78160 len:8 PRP1 0x0 PRP2 0x0 00:19:04.973 [2024-07-15 07:23:13.800968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.973 [2024-07-15 07:23:13.800978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.973 [2024-07-15 07:23:13.800985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.973 [2024-07-15 07:23:13.800993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78168 len:8 PRP1 0x0 PRP2 0x0 00:19:04.973 [2024-07-15 07:23:13.801002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.973 [2024-07-15 07:23:13.801011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.973 [2024-07-15 07:23:13.801019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.973 [2024-07-15 07:23:13.801028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78176 len:8 PRP1 0x0 PRP2 0x0 00:19:04.973 [2024-07-15 07:23:13.801037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.973 [2024-07-15 07:23:13.801047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.973 [2024-07-15 07:23:13.801055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.973 [2024-07-15 07:23:13.801063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78184 len:8 PRP1 0x0 PRP2 0x0 00:19:04.973 [2024-07-15 07:23:13.801089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.973 [2024-07-15 07:23:13.801107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.973 [2024-07-15 07:23:13.801116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.973 [2024-07-15 07:23:13.801124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78192 len:8 PRP1 0x0 PRP2 0x0 00:19:04.973 [2024-07-15 07:23:13.801133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.973 [2024-07-15 07:23:13.801143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.973 [2024-07-15 07:23:13.801150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.973 [2024-07-15 07:23:13.801158] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78200 len:8 PRP1 0x0 PRP2 0x0 00:19:04.973 [2024-07-15 07:23:13.801167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.973 [2024-07-15 07:23:13.801177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.973 [2024-07-15 07:23:13.801184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.973 [2024-07-15 07:23:13.801192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78208 len:8 PRP1 0x0 PRP2 0x0 00:19:04.973 [2024-07-15 07:23:13.801201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.973 [2024-07-15 07:23:13.801210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.973 [2024-07-15 07:23:13.801217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.973 [2024-07-15 07:23:13.801227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78216 len:8 PRP1 0x0 PRP2 0x0 00:19:04.973 [2024-07-15 07:23:13.801236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.973 [2024-07-15 07:23:13.801279] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14134d0 was disconnected and freed. reset controller. 00:19:04.973 [2024-07-15 07:23:13.801557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:04.973 [2024-07-15 07:23:13.801585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c8d40 (9): Bad file descriptor 00:19:04.973 [2024-07-15 07:23:13.801728] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:04.973 [2024-07-15 07:23:13.801753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c8d40 with addr=10.0.0.2, port=4420 00:19:04.973 [2024-07-15 07:23:13.801765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8d40 is same with the state(5) to be set 00:19:04.973 [2024-07-15 07:23:13.801784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c8d40 (9): Bad file descriptor 00:19:04.973 [2024-07-15 07:23:13.801801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:04.973 [2024-07-15 07:23:13.801813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:04.973 [2024-07-15 07:23:13.801829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:04.973 [2024-07-15 07:23:13.813383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
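The flood of ABORTED - SQ DELETION completions above, followed by connect() failed, errno = 111 (connection refused) on the reconnect attempt, is the expected fallout of the listener removal traced at host/timeout.sh@55: with nothing listening on 10.0.0.2:4420 the qpair is torn down, every queued verify I/O is aborted, and each reconnect is refused until the listener comes back. A minimal sketch of that trigger, reusing the NQN, address and port exactly as they appear in this trace, would be:
    # Remove the TCP listener out from under the connected initiator
    # (NQN, address and port copied from the trace above).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420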
00:19:04.973 [2024-07-15 07:23:13.813420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:04.973 07:23:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:06.934 [2024-07-15 07:23:15.813583] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:06.934 [2024-07-15 07:23:15.813673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c8d40 with addr=10.0.0.2, port=4420 00:19:06.934 [2024-07-15 07:23:15.813691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8d40 is same with the state(5) to be set 00:19:06.934 [2024-07-15 07:23:15.813719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c8d40 (9): Bad file descriptor 00:19:06.934 [2024-07-15 07:23:15.813739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:06.934 [2024-07-15 07:23:15.813749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:06.934 [2024-07-15 07:23:15.813760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:06.934 [2024-07-15 07:23:15.813788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:06.934 [2024-07-15 07:23:15.813800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:06.934 07:23:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:06.934 07:23:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:06.934 07:23:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:07.209 07:23:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:07.209 07:23:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:07.209 07:23:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:07.209 07:23:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:07.467 07:23:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:07.467 07:23:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:09.362 [2024-07-15 07:23:17.813969] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:09.362 [2024-07-15 07:23:17.814048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c8d40 with addr=10.0.0.2, port=4420 00:19:09.362 [2024-07-15 07:23:17.814090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8d40 is same with the state(5) to be set 00:19:09.362 [2024-07-15 07:23:17.814121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c8d40 (9): Bad file descriptor 00:19:09.362 [2024-07-15 07:23:17.814150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:09.362 [2024-07-15 07:23:17.814167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:09.362 [2024-07-15 07:23:17.814185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed 
state. 00:19:09.362 [2024-07-15 07:23:17.814226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:09.362 [2024-07-15 07:23:17.814241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:11.254 [2024-07-15 07:23:19.814287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:11.254 [2024-07-15 07:23:19.814381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:11.254 [2024-07-15 07:23:19.814405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:11.254 [2024-07-15 07:23:19.814424] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:11.254 [2024-07-15 07:23:19.814466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:12.184 00:19:12.184 Latency(us) 00:19:12.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.184 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:12.184 Verification LBA range: start 0x0 length 0x4000 00:19:12.184 NVMe0n1 : 8.18 1180.26 4.61 15.66 0.00 106859.17 3798.11 7015926.69 00:19:12.184 =================================================================================================================== 00:19:12.184 Total : 1180.26 4.61 15.66 0.00 106859.17 3798.11 7015926.69 00:19:12.184 0 00:19:12.441 07:23:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:19:12.441 07:23:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:12.441 07:23:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:13.007 07:23:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:13.007 07:23:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:19:13.007 07:23:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:13.007 07:23:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:13.265 07:23:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:19:13.265 07:23:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 82018 00:19:13.265 07:23:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81991 00:19:13.265 07:23:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 81991 ']' 00:19:13.265 07:23:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 81991 00:19:13.265 07:23:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:19:13.265 07:23:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:13.265 07:23:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81991 00:19:13.265 07:23:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:13.265 07:23:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:13.265 07:23:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81991' 00:19:13.265 killing process with pid 81991 00:19:13.265 07:23:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 81991 
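While reconnects are still being retried, the health checks at host/timeout.sh@57/@58 above still report the controller and its namespace; once the controller has given up, the same queries at host/timeout.sh@62/@63 come back empty, which is what the [[ '' == '' ]] assertions verify before the first bdevperf is killed. The checks are plain RPC calls against the bdevperf application socket, mirroring the trace (socket path and expected names copied from the log):
    # Query the running bdevperf instance over its RPC socket.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_get_controllers | jq -r '.[].name'   # NVMe0 while attached, empty once torn down
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_get_bdevs | jq -r '.[].name'               # NVMe0n1 while attached, empty once torn down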
00:19:13.265 Received shutdown signal, test time was about 9.375571 seconds 00:19:13.265 00:19:13.265 Latency(us) 00:19:13.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.265 =================================================================================================================== 00:19:13.265 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:13.265 07:23:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 81991 00:19:13.265 07:23:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:13.522 [2024-07-15 07:23:22.417658] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:13.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:13.522 07:23:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82135 00:19:13.522 07:23:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:13.522 07:23:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82135 /var/tmp/bdevperf.sock 00:19:13.522 07:23:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82135 ']' 00:19:13.522 07:23:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:13.522 07:23:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:13.522 07:23:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:13.522 07:23:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:13.522 07:23:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:13.780 [2024-07-15 07:23:22.484947] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
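The recovery path traced above re-adds the TCP listener on the target and launches a fresh bdevperf instance against the shared RPC socket. A sketch with the arguments copied from host/timeout.sh@71 and @73 follows; the backgrounding and pid capture are an assumption inferred from the bdevperf_pid/waitforlisten steps in the trace, and the run itself is kicked off later via bdevperf.py perform_tests as traced below:
    # Restore the listener that was removed earlier in the test.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Start a new bdevperf in wait-for-RPC mode (-z) and remember its pid so the
    # script can wait on /var/tmp/bdevperf.sock before configuring it.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
        -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    bdevperf_pid=$!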
00:19:13.780 [2024-07-15 07:23:22.485266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82135 ] 00:19:13.780 [2024-07-15 07:23:22.615945] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.780 [2024-07-15 07:23:22.688482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.780 [2024-07-15 07:23:22.723303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:14.039 07:23:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:14.039 07:23:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:14.039 07:23:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:14.297 07:23:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:14.555 NVMe0n1 00:19:14.555 07:23:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82147 00:19:14.555 07:23:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:14.555 07:23:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:14.555 Running I/O for 10 seconds... 00:19:15.529 07:23:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:15.790 [2024-07-15 07:23:24.654188] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654255] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654268] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654285] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654294] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654311] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654319] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654328] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the 
state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654336] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654344] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654353] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654369] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654377] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654402] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654410] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654418] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654427] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654435] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654459] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654476] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654492] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654509] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654519] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654527] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654535] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654545] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654562] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654570] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654579] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654595] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654603] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654611] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654620] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654628] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654636] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654644] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654652] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654660] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654669] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654677] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654694] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 
07:23:24.654702] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654711] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654720] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654728] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654736] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654745] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654753] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654761] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654769] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654777] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654785] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654793] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654802] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654810] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654818] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654827] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654835] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654843] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654851] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654859] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654868] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654876] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same 
with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654884] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654892] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654900] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654908] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654916] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654925] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654934] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654943] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.791 [2024-07-15 07:23:24.654967] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.792 [2024-07-15 07:23:24.654975] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.792 [2024-07-15 07:23:24.654984] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.792 [2024-07-15 07:23:24.654992] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.792 [2024-07-15 07:23:24.655000] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14712a0 is same with the state(5) to be set 00:19:15.792 [2024-07-15 07:23:24.655750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.655785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.655809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.655820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.655833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.655842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 
[2024-07-15 07:23:24.655854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.655864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.655876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.655885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.655896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.655906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.655918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.655927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.655939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.655948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.655960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:60920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.655969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.655980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:60928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.655989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.656000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.656009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.656021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.656030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.656041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:60952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.656050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.656061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.656082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.656096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:60968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.656106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.656117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:60976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.656126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.656138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:60984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.656148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.656160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:60992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.656169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.656181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.656190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.656202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:61008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.656211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.656222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:61016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.656231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.656242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:61024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.656252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.656263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.656272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.656283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:93 nsid:1 lba:61040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.656292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.656303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.656322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.656333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.656342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.656353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.656362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.656373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.656383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.656394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.792 [2024-07-15 07:23:24.656403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.792 [2024-07-15 07:23:24.656414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:61112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61120 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:61144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:15.793 [2024-07-15 07:23:24.656714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:61256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:61280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656936] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.656988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.656997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.657009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.657018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.657029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.657038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.657049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.657058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.657069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.657090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.657102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.657111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.657123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.657131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.657143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.657153] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.657164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.657173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.657188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.657197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.657208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.793 [2024-07-15 07:23:24.657217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.793 [2024-07-15 07:23:24.657229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.794 [2024-07-15 07:23:24.657238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.794 [2024-07-15 07:23:24.657258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.794 [2024-07-15 07:23:24.657279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.794 [2024-07-15 07:23:24.657299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.794 [2024-07-15 07:23:24.657320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.794 [2024-07-15 07:23:24.657340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.794 [2024-07-15 07:23:24.657365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.794 [2024-07-15 07:23:24.657386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.794 [2024-07-15 07:23:24.657406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.794 [2024-07-15 07:23:24.657427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:61472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.794 [2024-07-15 07:23:24.657447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:61480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.794 [2024-07-15 07:23:24.657468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:61488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.794 [2024-07-15 07:23:24.657488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:61496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.794 [2024-07-15 07:23:24.657508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:61504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.794 [2024-07-15 07:23:24.657530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:61512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.794 [2024-07-15 07:23:24.657551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:61520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.794 [2024-07-15 07:23:24.657571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:15.794 [2024-07-15 07:23:24.657583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.794 [2024-07-15 07:23:24.657592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.794 [2024-07-15 07:23:24.657612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.794 [2024-07-15 07:23:24.657645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.794 [2024-07-15 07:23:24.657665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.794 [2024-07-15 07:23:24.657685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:61568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.794 [2024-07-15 07:23:24.657708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.794 [2024-07-15 07:23:24.657729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:61584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.794 [2024-07-15 07:23:24.657749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:15.794 [2024-07-15 07:23:24.657770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:15.794 [2024-07-15 07:23:24.657791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657802] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:15.794 [2024-07-15 07:23:24.657811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:15.794 [2024-07-15 07:23:24.657832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:15.794 [2024-07-15 07:23:24.657852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:15.794 [2024-07-15 07:23:24.657876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:15.794 [2024-07-15 07:23:24.657896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:15.794 [2024-07-15 07:23:24.657917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:15.794 [2024-07-15 07:23:24.657937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:15.794 [2024-07-15 07:23:24.657957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:15.794 [2024-07-15 07:23:24.657977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.657988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:15.794 [2024-07-15 07:23:24.657997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.658008] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:15.794 [2024-07-15 07:23:24.658017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.658029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:15.794 [2024-07-15 07:23:24.658039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.794 [2024-07-15 07:23:24.658051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:15.795 [2024-07-15 07:23:24.658060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.795 [2024-07-15 07:23:24.658081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:61592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.795 [2024-07-15 07:23:24.658092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.795 [2024-07-15 07:23:24.658103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54a4d0 is same with the state(5) to be set 00:19:15.795 [2024-07-15 07:23:24.658115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:15.795 [2024-07-15 07:23:24.658128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:15.795 [2024-07-15 07:23:24.658136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61600 len:8 PRP1 0x0 PRP2 0x0 00:19:15.795 [2024-07-15 07:23:24.658146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.795 [2024-07-15 07:23:24.658156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:15.795 [2024-07-15 07:23:24.658164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:15.795 [2024-07-15 07:23:24.658172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61728 len:8 PRP1 0x0 PRP2 0x0 00:19:15.795 [2024-07-15 07:23:24.658180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.795 [2024-07-15 07:23:24.658190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:15.795 [2024-07-15 07:23:24.658199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:15.795 [2024-07-15 07:23:24.658207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61736 len:8 PRP1 0x0 PRP2 0x0 00:19:15.795 [2024-07-15 07:23:24.658216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.795 [2024-07-15 07:23:24.658226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:15.795 [2024-07-15 07:23:24.658234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:15.795 [2024-07-15 07:23:24.658242] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61744 len:8 PRP1 0x0 PRP2 0x0 00:19:15.795 [2024-07-15 07:23:24.658251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.795 [2024-07-15 07:23:24.658260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:15.795 [2024-07-15 07:23:24.658267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:15.795 [2024-07-15 07:23:24.658275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61752 len:8 PRP1 0x0 PRP2 0x0 00:19:15.795 [2024-07-15 07:23:24.658284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.795 [2024-07-15 07:23:24.658293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:15.795 [2024-07-15 07:23:24.658300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:15.795 [2024-07-15 07:23:24.658308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61760 len:8 PRP1 0x0 PRP2 0x0 00:19:15.795 [2024-07-15 07:23:24.658317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.795 [2024-07-15 07:23:24.658326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:15.795 [2024-07-15 07:23:24.658333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:15.795 [2024-07-15 07:23:24.658344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61768 len:8 PRP1 0x0 PRP2 0x0 00:19:15.795 [2024-07-15 07:23:24.658353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.795 [2024-07-15 07:23:24.658363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:15.795 [2024-07-15 07:23:24.658371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:15.795 [2024-07-15 07:23:24.658379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61776 len:8 PRP1 0x0 PRP2 0x0 00:19:15.795 [2024-07-15 07:23:24.658388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.795 [2024-07-15 07:23:24.658397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:15.795 [2024-07-15 07:23:24.658404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:15.795 [2024-07-15 07:23:24.658412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61784 len:8 PRP1 0x0 PRP2 0x0 00:19:15.795 [2024-07-15 07:23:24.658421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.795 [2024-07-15 07:23:24.658430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:15.795 [2024-07-15 07:23:24.658437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:15.795 [2024-07-15 07:23:24.658445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:61792 len:8 PRP1 0x0 PRP2 0x0 00:19:15.795 [2024-07-15 07:23:24.658454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.795 [2024-07-15 07:23:24.658463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:15.795 [2024-07-15 07:23:24.658472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:15.795 [2024-07-15 07:23:24.658480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61800 len:8 PRP1 0x0 PRP2 0x0 00:19:15.795 [2024-07-15 07:23:24.658489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.795 [2024-07-15 07:23:24.658498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:15.795 [2024-07-15 07:23:24.658505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:15.795 [2024-07-15 07:23:24.658513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61808 len:8 PRP1 0x0 PRP2 0x0 00:19:15.795 [2024-07-15 07:23:24.658522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.795 [2024-07-15 07:23:24.658532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:15.795 [2024-07-15 07:23:24.658539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:15.795 [2024-07-15 07:23:24.658547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61816 len:8 PRP1 0x0 PRP2 0x0 00:19:15.795 [2024-07-15 07:23:24.658555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.795 [2024-07-15 07:23:24.658565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:15.795 [2024-07-15 07:23:24.658572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:15.795 [2024-07-15 07:23:24.658580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61824 len:8 PRP1 0x0 PRP2 0x0 00:19:15.795 [2024-07-15 07:23:24.658588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.795 [2024-07-15 07:23:24.658598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:15.795 [2024-07-15 07:23:24.658605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:15.795 [2024-07-15 07:23:24.658614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61832 len:8 PRP1 0x0 PRP2 0x0 00:19:15.795 [2024-07-15 07:23:24.658623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.795 [2024-07-15 07:23:24.658632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:15.795 [2024-07-15 07:23:24.658639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:15.795 [2024-07-15 07:23:24.672327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61840 len:8 PRP1 0x0 PRP2 0x0 
00:19:15.795 [2024-07-15 07:23:24.672374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.795 [2024-07-15 07:23:24.672398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:15.795 [2024-07-15 07:23:24.672407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:15.795 [2024-07-15 07:23:24.672416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61848 len:8 PRP1 0x0 PRP2 0x0 00:19:15.795 [2024-07-15 07:23:24.672425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.795 [2024-07-15 07:23:24.672434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:15.795 [2024-07-15 07:23:24.672442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:15.795 [2024-07-15 07:23:24.672450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61856 len:8 PRP1 0x0 PRP2 0x0 00:19:15.796 [2024-07-15 07:23:24.672459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.796 [2024-07-15 07:23:24.672469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:15.796 [2024-07-15 07:23:24.672477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:15.796 [2024-07-15 07:23:24.672486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61864 len:8 PRP1 0x0 PRP2 0x0 00:19:15.796 [2024-07-15 07:23:24.672495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.796 [2024-07-15 07:23:24.672505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:15.796 [2024-07-15 07:23:24.672512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:15.796 [2024-07-15 07:23:24.672520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61872 len:8 PRP1 0x0 PRP2 0x0 00:19:15.796 [2024-07-15 07:23:24.672529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.796 [2024-07-15 07:23:24.672593] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x54a4d0 was disconnected and freed. reset controller. 
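An aside that is not part of the captured output: the long run of WRITE/READ completions above with status ABORTED - SQ DELETION (00/08) is the expected side effect of the I/O qpair being torn down for the controller reset; every command still queued against the deleted submission queue is completed back to bdev_nvme with that status (SCT 0x0, SC 0x08 in the NVMe completion) before qpair 0x54a4d0 is freed. When digging through a log like this, a quick way to size the burst is to count those completions, for example:

  grep -c 'ABORTED - SQ DELETION' nvmf-tcp-uring-vg-autotest.log   # file name is hypothetical; counts the aborted completions shown above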
00:19:15.796 [2024-07-15 07:23:24.672775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:15.796 [2024-07-15 07:23:24.672793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:15.796 [2024-07-15 07:23:24.672807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:15.796 [2024-07-15 07:23:24.672816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:15.796 [2024-07-15 07:23:24.672826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:15.796 [2024-07-15 07:23:24.672835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:15.796 [2024-07-15 07:23:24.672845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:15.796 [2024-07-15 07:23:24.672854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:15.796 [2024-07-15 07:23:24.672864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4ffd40 is same with the state(5) to be set
00:19:15.796 [2024-07-15 07:23:24.673129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:15.796 [2024-07-15 07:23:24.673156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4ffd40 (9): Bad file descriptor
00:19:15.796 [2024-07-15 07:23:24.673265] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:15.796 [2024-07-15 07:23:24.673287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4ffd40 with addr=10.0.0.2, port=4420
00:19:15.796 [2024-07-15 07:23:24.673304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4ffd40 is same with the state(5) to be set
00:19:15.796 [2024-07-15 07:23:24.673322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4ffd40 (9): Bad file descriptor
00:19:15.796 [2024-07-15 07:23:24.673337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:15.796 [2024-07-15 07:23:24.673347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:15.796 [2024-07-15 07:23:24.673357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:15.796 [2024-07-15 07:23:24.673377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:15.796 [2024-07-15 07:23:24.673388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:15.795 07:23:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:19:16.731 [2024-07-15 07:23:25.673539] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:16.731 [2024-07-15 07:23:25.673635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4ffd40 with addr=10.0.0.2, port=4420
00:19:16.731 [2024-07-15 07:23:25.673656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4ffd40 is same with the state(5) to be set
00:19:16.731 [2024-07-15 07:23:25.673684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4ffd40 (9): Bad file descriptor
00:19:16.731 [2024-07-15 07:23:25.673704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:16.731 [2024-07-15 07:23:25.673715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:16.731 [2024-07-15 07:23:25.673726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:16.731 [2024-07-15 07:23:25.673754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:16.731 [2024-07-15 07:23:25.673765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
07:23:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:17.297 [2024-07-15 07:23:25.972820] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
07:23:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 82147
00:19:17.862 [2024-07-15 07:23:26.688686] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:25.972
00:19:25.972                                                                            Latency(us)
00:19:25.972 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:25.972 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:25.972 Verification LBA range: start 0x0 length 0x4000
00:19:25.972   NVMe0n1                   :      10.01    6019.63      23.51       0.00       0.00   21230.04    1370.30 3050402.91
00:19:25.972 ===================================================================================================================
00:19:25.972   Total                     :               6019.63      23.51       0.00       0.00   21230.04    1370.30 3050402.91
00:19:25.972 0
00:19:25.973 07:23:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82252
00:19:25.973 07:23:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:25.973 07:23:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:19:25.973 Running I/O for 10 seconds...
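Another aside, not from the captured run: the bdevperf summary above reports 6019.63 IOPS at an average completion latency of 21230.04 us for a verify job with queue depth 128, which is roughly self-consistent with Little's law (mean latency ≈ queue depth / IOPS). A quick cross-check from a shell:

  awk 'BEGIN { printf "%.2f us\n", 128 / 6019.63 * 1e6 }'   # ≈ 21264 us, close to the reported 21230.04 us average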
00:19:25.973 07:23:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:25.973 [2024-07-15 07:23:34.816370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.973 [2024-07-15 07:23:34.816752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.816956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.817221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.817286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.817505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.817719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.817897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.818153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.818387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.818635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.818847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.819064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.819277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.819450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.819475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.819496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.819512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.819531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 
[2024-07-15 07:23:34.819546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.819562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.819578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.819596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.819612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.819630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.819645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.819664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.819679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.819697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.819712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.819730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.819745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.819766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.819782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.819800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.819817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.819835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.819852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.819871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.819887] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.819907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.819922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.819940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.819957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.819976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.819991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.820008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.820023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.820040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.820054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.820088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.820105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.820123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.820137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.820154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.820168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.820187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.820201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.820252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.820266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.820284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.820297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.820315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.820329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.820346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.820361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.820379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.973 [2024-07-15 07:23:34.820393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.973 [2024-07-15 07:23:34.820410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.820426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.820444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.820459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.820478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.820492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.820520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.820534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.820550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.820564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.820580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.820594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.820611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.820627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.820645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.820661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.820678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.820693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.820712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.820728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.820746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.820760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.820777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.820792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.820809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.820824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.820841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.820852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.820863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.820872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.820884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.820892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.820904] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.820913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.820924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.820933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.820944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.820952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.820965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.820974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.820985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.820994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.821005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.821015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.821026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.821035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.821046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.821055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.821066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.821090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.821103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.821112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.821123] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.821132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.821143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.821152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.821163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.821173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.821184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.821193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.821205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.821214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.821225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.821234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.821245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.821254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.821265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.821273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.821284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.821295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.821307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.821316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.821327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61312 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.821335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.821347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.821356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.821367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.821376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.821387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.974 [2024-07-15 07:23:34.821396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.974 [2024-07-15 07:23:34.821407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 
[2024-07-15 07:23:34.821554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.821985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.821993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.822004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.822013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.822025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.822034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.822044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.822053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.822064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.822085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.822098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.822107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.822119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.822128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.822140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.822149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.822160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.822168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.822179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.975 [2024-07-15 07:23:34.822189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.822201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:60640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.975 [2024-07-15 07:23:34.822210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.822221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.975 [2024-07-15 07:23:34.822230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.975 [2024-07-15 07:23:34.822241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:60656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.975 [2024-07-15 07:23:34.822250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.976 [2024-07-15 07:23:34.822261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:60664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.976 [2024-07-15 07:23:34.822271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.976 [2024-07-15 07:23:34.822281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:60672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.976 [2024-07-15 07:23:34.822290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.976 [2024-07-15 07:23:34.822302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.976 [2024-07-15 07:23:34.822311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.976 [2024-07-15 07:23:34.822322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:60688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.976 [2024-07-15 07:23:34.822331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.976 [2024-07-15 07:23:34.822342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:60696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.976 [2024-07-15 07:23:34.822352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.976 [2024-07-15 07:23:34.822363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.976 [2024-07-15 07:23:34.822372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.976 [2024-07-15 07:23:34.822383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.976 [2024-07-15 07:23:34.822391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.976 [2024-07-15 
07:23:34.822403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:60720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.976 [2024-07-15 07:23:34.822412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.976 [2024-07-15 07:23:34.822423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:60728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.976 [2024-07-15 07:23:34.822431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.976 [2024-07-15 07:23:34.822442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:60736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.976 [2024-07-15 07:23:34.822454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.976 [2024-07-15 07:23:34.822473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:60744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.976 [2024-07-15 07:23:34.822486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.976 [2024-07-15 07:23:34.822497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:60752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.976 [2024-07-15 07:23:34.822507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.976 [2024-07-15 07:23:34.822519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.976 [2024-07-15 07:23:34.822528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.976 [2024-07-15 07:23:34.822539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54d8f0 is same with the state(5) to be set 00:19:25.976 [2024-07-15 07:23:34.822552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.976 [2024-07-15 07:23:34.822560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.976 [2024-07-15 07:23:34.822568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61648 len:8 PRP1 0x0 PRP2 0x0 00:19:25.976 [2024-07-15 07:23:34.822577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.976 [2024-07-15 07:23:34.822622] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x54d8f0 was disconnected and freed. reset controller. 
00:19:25.976 [2024-07-15 07:23:34.822724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.976 [2024-07-15 07:23:34.822741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.976 [2024-07-15 07:23:34.822752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.976 [2024-07-15 07:23:34.822761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.976 [2024-07-15 07:23:34.822771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.976 [2024-07-15 07:23:34.822780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.976 [2024-07-15 07:23:34.822802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.976 [2024-07-15 07:23:34.822810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.976 [2024-07-15 07:23:34.822819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4ffd40 is same with the state(5) to be set 00:19:25.976 [2024-07-15 07:23:34.823042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:25.976 [2024-07-15 07:23:34.823083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4ffd40 (9): Bad file descriptor 00:19:25.976 [2024-07-15 07:23:34.823181] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:25.976 [2024-07-15 07:23:34.823203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4ffd40 with addr=10.0.0.2, port=4420 00:19:25.976 [2024-07-15 07:23:34.823213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4ffd40 is same with the state(5) to be set 00:19:25.976 [2024-07-15 07:23:34.823232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4ffd40 (9): Bad file descriptor 00:19:25.976 [2024-07-15 07:23:34.823248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:25.976 [2024-07-15 07:23:34.823257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:25.976 [2024-07-15 07:23:34.823268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:25.976 [2024-07-15 07:23:34.823288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:25.976 [2024-07-15 07:23:34.823298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:25.976 07:23:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:19:26.911 [2024-07-15 07:23:35.823449] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:26.911 [2024-07-15 07:23:35.823525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4ffd40 with addr=10.0.0.2, port=4420 00:19:26.912 [2024-07-15 07:23:35.823543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4ffd40 is same with the state(5) to be set 00:19:26.912 [2024-07-15 07:23:35.823572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4ffd40 (9): Bad file descriptor 00:19:26.912 [2024-07-15 07:23:35.823592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:26.912 [2024-07-15 07:23:35.823602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:26.912 [2024-07-15 07:23:35.823613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:26.912 [2024-07-15 07:23:35.823639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:26.912 [2024-07-15 07:23:35.823650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:28.287 [2024-07-15 07:23:36.823797] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:28.287 [2024-07-15 07:23:36.823874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4ffd40 with addr=10.0.0.2, port=4420 00:19:28.287 [2024-07-15 07:23:36.823892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4ffd40 is same with the state(5) to be set 00:19:28.287 [2024-07-15 07:23:36.823920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4ffd40 (9): Bad file descriptor 00:19:28.287 [2024-07-15 07:23:36.823940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:28.287 [2024-07-15 07:23:36.823950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:28.287 [2024-07-15 07:23:36.823961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:28.287 [2024-07-15 07:23:36.823987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:28.287 [2024-07-15 07:23:36.823999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:29.222 [2024-07-15 07:23:37.827988] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:29.222 [2024-07-15 07:23:37.828089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4ffd40 with addr=10.0.0.2, port=4420 00:19:29.222 [2024-07-15 07:23:37.828127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4ffd40 is same with the state(5) to be set 00:19:29.222 [2024-07-15 07:23:37.828428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4ffd40 (9): Bad file descriptor 00:19:29.222 [2024-07-15 07:23:37.828727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:29.222 [2024-07-15 07:23:37.828754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:29.222 [2024-07-15 07:23:37.828766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:29.222 [2024-07-15 07:23:37.832839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:29.222 [2024-07-15 07:23:37.832885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:29.222 07:23:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:29.222 [2024-07-15 07:23:38.131458] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.222 07:23:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 82252 00:19:30.159 [2024-07-15 07:23:38.871870] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:35.468
00:19:35.468 Latency(us)
00:19:35.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:35.468 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:35.468 Verification LBA range: start 0x0 length 0x4000
00:19:35.468 NVMe0n1 : 10.01 5133.49 20.05 3492.87 0.00 14803.15 774.52 3019898.88
00:19:35.468 ===================================================================================================================
00:19:35.468 Total : 5133.49 20.05 3492.87 0.00 14803.15 0.00 3019898.88
00:19:35.468 0
00:19:35.468 07:23:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82135
00:19:35.468 07:23:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82135 ']'
00:19:35.468 07:23:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82135
00:19:35.468 07:23:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:19:35.468 07:23:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:35.468 07:23:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82135
00:19:35.468 07:23:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:19:35.468 killing process with pid 82135
Received shutdown signal, test time was about 10.000000 seconds
00:19:35.468
00:19:35.468 Latency(us)
00:19:35.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:35.468 ===================================================================================================================
00:19:35.468 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:35.468 07:23:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:19:35.468 07:23:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82135'
00:19:35.468 07:23:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82135
00:19:35.468 07:23:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82135
00:19:35.468 07:23:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:19:35.468 07:23:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82366
00:19:35.468 07:23:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82366 /var/tmp/bdevperf.sock
00:19:35.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:35.468 07:23:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82366 ']'
00:19:35.468 07:23:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:35.468 07:23:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:35.468 07:23:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:35.468 07:23:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:35.468 07:23:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:19:35.468 [2024-07-15 07:23:43.941744] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization...
00:19:35.468 [2024-07-15 07:23:43.942058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82366 ]
00:19:35.468 [2024-07-15 07:23:44.081840] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:35.468 [2024-07-15 07:23:44.152423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:19:35.468 [2024-07-15 07:23:44.186222] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:19:35.468 07:23:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:35.468 07:23:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0
00:19:35.468 07:23:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82375
00:19:35.468 07:23:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82366 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:19:35.468 07:23:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:19:35.734 07:23:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:19:36.028 NVMe0n1
00:19:36.028 07:23:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82411
00:19:36.028 07:23:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:36.028 07:23:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:19:36.028 Running I/O for 10 seconds...
00:19:36.961 07:23:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:37.223 [2024-07-15 07:23:46.096561] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.223 [2024-07-15 07:23:46.096992] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097031] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097049] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.224 [2024-07-15 07:23:46.097065] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.224 [2024-07-15 07:23:46.097105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.224 [2024-07-15 07:23:46.097122] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with [2024-07-15 07:23:46.097124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.224 the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.224 [2024-07-15 07:23:46.097139] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.224 [2024-07-15 07:23:46.097157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.224 [2024-07-15 07:23:46.097156] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.224 [2024-07-15 07:23:46.097172] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf3c00 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to 
be set 00:19:37.224 [2024-07-15 07:23:46.097205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097220] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097236] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097267] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097342] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097375] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097397] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097413] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097462] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097532] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097563] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097579] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097594] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097610] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097670] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097688] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097719] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097735] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097750] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097766] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097781] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097797] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097813] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097829] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097854] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097881] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097908] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097926] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097941] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097964] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.097990] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.098013] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.098029] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.098045] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.098060] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.098097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.098114] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.098130] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.098145] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.098161] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.098177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.098192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.098208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.098224] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.098240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.098256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.098272] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.098287] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.098302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.098318] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.098334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.224 [2024-07-15 07:23:46.098352] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098380] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098405] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098429] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the 
state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098445] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098476] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098491] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098507] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098522] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098537] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098553] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098570] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098595] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098620] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098640] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098656] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098671] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098702] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098718] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098734] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098749] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098764] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098780] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098795] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098812] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098844] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098860] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098875] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098890] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098906] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098921] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098937] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098953] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.098978] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.099003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.099027] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.099043] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.099059] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.099704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.099958] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.100106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.100258] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.100381] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.100480] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 
07:23:46.100621] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.100762] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.101141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.101344] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.101520] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146eb80 is same with the state(5) to be set 00:19:37.225 [2024-07-15 07:23:46.101708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:53008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.225 [2024-07-15 07:23:46.101826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.225 [2024-07-15 07:23:46.101948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:115968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.225 [2024-07-15 07:23:46.101962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.225 [2024-07-15 07:23:46.101975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.225 [2024-07-15 07:23:46.101984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.225 [2024-07-15 07:23:46.101996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:91640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.225 [2024-07-15 07:23:46.102005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.225 [2024-07-15 07:23:46.102017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:33152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.225 [2024-07-15 07:23:46.102026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.225 [2024-07-15 07:23:46.102038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.225 [2024-07-15 07:23:46.102047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.225 [2024-07-15 07:23:46.102059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:106296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.225 [2024-07-15 07:23:46.102068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.225 [2024-07-15 07:23:46.102092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:30864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.225 [2024-07-15 07:23:46.102102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.225 [2024-07-15 07:23:46.102114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.225 [2024-07-15 07:23:46.102123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.225 [2024-07-15 07:23:46.102135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:61504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.225 [2024-07-15 07:23:46.102146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.225 [2024-07-15 07:23:46.102157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:35880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.225 [2024-07-15 07:23:46.102167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.225 [2024-07-15 07:23:46.102178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:54960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.225 [2024-07-15 07:23:46.102187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.225 [2024-07-15 07:23:46.102199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:56640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.225 [2024-07-15 07:23:46.102208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.225 [2024-07-15 07:23:46.102220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:52264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.225 [2024-07-15 07:23:46.102231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.225 [2024-07-15 07:23:46.102243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:51784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.225 [2024-07-15 07:23:46.102252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:127848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:86096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:114920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:29280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:49464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:116736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:37.226 [2024-07-15 07:23:46.102549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:41496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:27232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:68960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:33912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:55920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102759] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:68624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:93440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:27272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:89656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.102978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.102995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.103004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.103016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:122992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.103026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.103038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.103048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.103060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:49432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.103069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.103091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:117888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.103101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.103113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.226 [2024-07-15 07:23:46.103122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.226 [2024-07-15 07:23:46.103135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:118992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:120432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103204] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:68 nsid:1 lba:48240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:58184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:52912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:54560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 
lba:101008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:57624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:104848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:91024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:128976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:90736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:119816 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:56960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:56344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:108872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:55696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 
07:23:46.103846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.227 [2024-07-15 07:23:46.103959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.227 [2024-07-15 07:23:46.103968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.103980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.103989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:55784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:111456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:50944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:123752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:55088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:89976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:66160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:103792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:125288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:120872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:125024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.228 [2024-07-15 07:23:46.104641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.228 [2024-07-15 07:23:46.104650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.229 [2024-07-15 07:23:46.104661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.229 [2024-07-15 07:23:46.104670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.229 [2024-07-15 07:23:46.104680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c62310 is same with the state(5) to be set 00:19:37.229 [2024-07-15 07:23:46.104693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:37.229 [2024-07-15 07:23:46.104701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:37.229 [2024-07-15 07:23:46.104711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104576 len:8 PRP1 0x0 PRP2 0x0 00:19:37.229 [2024-07-15 07:23:46.104723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.229 [2024-07-15 07:23:46.104767] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c62310 was disconnected and freed. reset controller. 00:19:37.229 [2024-07-15 07:23:46.105055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:37.229 [2024-07-15 07:23:46.105094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf3c00 (9): Bad file descriptor 00:19:37.229 [2024-07-15 07:23:46.105212] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:37.229 [2024-07-15 07:23:46.105233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bf3c00 with addr=10.0.0.2, port=4420 00:19:37.229 [2024-07-15 07:23:46.105245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf3c00 is same with the state(5) to be set 00:19:37.229 [2024-07-15 07:23:46.105263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf3c00 (9): Bad file descriptor 00:19:37.229 [2024-07-15 07:23:46.105279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:37.229 [2024-07-15 07:23:46.105288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:37.229 [2024-07-15 07:23:46.105298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:37.229 [2024-07-15 07:23:46.105318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:37.229 [2024-07-15 07:23:46.105329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:37.229 07:23:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 82411 00:19:39.761 [2024-07-15 07:23:48.105667] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:39.761 [2024-07-15 07:23:48.105746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bf3c00 with addr=10.0.0.2, port=4420 00:19:39.761 [2024-07-15 07:23:48.105764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf3c00 is same with the state(5) to be set 00:19:39.761 [2024-07-15 07:23:48.105793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf3c00 (9): Bad file descriptor 00:19:39.761 [2024-07-15 07:23:48.105827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:39.761 [2024-07-15 07:23:48.105839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:39.761 [2024-07-15 07:23:48.105849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:39.761 [2024-07-15 07:23:48.105882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
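The burst of "ABORTED - SQ DELETION" completions above is the host failing its queued reads while the controller is being reset, and each follow-up reconnect attempt then dies with errno 111 (ECONNREFUSED on Linux) because nothing is accepting connections on 10.0.0.2:4420 at this point in the test. A minimal spot-check of that condition from the shell could look like the sketch below; it is illustrative only and not part of the test scripts.

    # Hypothetical probe of the NVMe/TCP listener the initiator keeps retrying.
    # While nothing is listening this fails with "Connection refused" (errno 111),
    # matching the uring_sock_create errors in the log above.
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
        echo "listener 10.0.0.2:4420 unreachable (ECONNREFUSED expected while the target is down)"
    fi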
00:19:39.761 [2024-07-15 07:23:48.105893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:41.662 [2024-07-15 07:23:50.106097] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:41.662 [2024-07-15 07:23:50.106191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bf3c00 with addr=10.0.0.2, port=4420 00:19:41.662 [2024-07-15 07:23:50.106220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf3c00 is same with the state(5) to be set 00:19:41.662 [2024-07-15 07:23:50.106258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf3c00 (9): Bad file descriptor 00:19:41.662 [2024-07-15 07:23:50.106289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:41.662 [2024-07-15 07:23:50.106306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:41.662 [2024-07-15 07:23:50.106324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:41.662 [2024-07-15 07:23:50.106365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:41.662 [2024-07-15 07:23:50.106385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:43.559 [2024-07-15 07:23:52.106490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:43.559 [2024-07-15 07:23:52.106563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:43.559 [2024-07-15 07:23:52.106578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:43.559 [2024-07-15 07:23:52.106588] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:43.559 [2024-07-15 07:23:52.106616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:44.491 00:19:44.491 Latency(us) 00:19:44.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.491 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:19:44.491 NVMe0n1 : 8.16 1976.93 7.72 15.68 0.00 64307.06 1645.85 7046430.72 00:19:44.492 =================================================================================================================== 00:19:44.492 Total : 1976.93 7.72 15.68 0.00 64307.06 1645.85 7046430.72 00:19:44.492 0 00:19:44.492 07:23:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:44.492 Attaching 5 probes... 
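As a quick cross-check of the latency summary just above: with 4096-byte reads, 1976.93 IOPS corresponds to the reported 7.72 MiB/s. An illustrative one-liner for the conversion follows; the trace probe listing then continues below.

    # Illustrative arithmetic only: IOPS x 4 KiB I/O size, expressed in MiB/s.
    awk 'BEGIN { printf "%.2f MiB/s\n", 1976.93 * 4096 / (1024 * 1024) }'   # -> 7.72 MiB/s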
00:19:44.492 1330.883392: reset bdev controller NVMe0 00:19:44.492 1330.970859: reconnect bdev controller NVMe0 00:19:44.492 3331.333763: reconnect delay bdev controller NVMe0 00:19:44.492 3331.362637: reconnect bdev controller NVMe0 00:19:44.492 5331.781040: reconnect delay bdev controller NVMe0 00:19:44.492 5331.805143: reconnect bdev controller NVMe0 00:19:44.492 7332.293385: reconnect delay bdev controller NVMe0 00:19:44.492 7332.321496: reconnect bdev controller NVMe0 00:19:44.492 07:23:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:19:44.492 07:23:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:19:44.492 07:23:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 82375 00:19:44.492 07:23:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:44.492 07:23:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82366 00:19:44.492 07:23:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82366 ']' 00:19:44.492 07:23:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82366 00:19:44.492 07:23:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:19:44.492 07:23:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:44.492 07:23:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82366 00:19:44.492 killing process with pid 82366 00:19:44.492 Received shutdown signal, test time was about 8.220400 seconds 00:19:44.492 00:19:44.492 Latency(us) 00:19:44.492 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.492 =================================================================================================================== 00:19:44.492 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:44.492 07:23:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:44.492 07:23:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:44.492 07:23:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82366' 00:19:44.492 07:23:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82366 00:19:44.492 07:23:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82366 00:19:44.492 07:23:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:44.750 07:23:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:19:44.750 07:23:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:19:44.750 07:23:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:44.750 07:23:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:19:44.750 07:23:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:44.750 07:23:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:19:44.750 07:23:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:44.750 07:23:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:44.750 rmmod nvme_tcp 00:19:44.750 rmmod nvme_fabrics 00:19:44.750 rmmod nvme_keyring 00:19:44.750 07:23:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:45.008 07:23:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # 
set -e 00:19:45.008 07:23:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:19:45.008 07:23:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 81936 ']' 00:19:45.008 07:23:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 81936 00:19:45.008 07:23:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 81936 ']' 00:19:45.008 07:23:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 81936 00:19:45.008 07:23:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:19:45.008 07:23:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:45.008 07:23:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81936 00:19:45.008 killing process with pid 81936 00:19:45.008 07:23:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:45.008 07:23:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:45.008 07:23:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81936' 00:19:45.008 07:23:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 81936 00:19:45.008 07:23:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 81936 00:19:45.008 07:23:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:45.008 07:23:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:45.008 07:23:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:45.008 07:23:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:45.008 07:23:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:45.008 07:23:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.008 07:23:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.008 07:23:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.008 07:23:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:45.008 ************************************ 00:19:45.008 END TEST nvmf_timeout 00:19:45.008 ************************************ 00:19:45.008 00:19:45.008 real 0m46.278s 00:19:45.008 user 2m16.336s 00:19:45.008 sys 0m5.455s 00:19:45.008 07:23:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:45.008 07:23:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:45.266 07:23:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:45.266 07:23:53 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:19:45.266 07:23:53 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:19:45.266 07:23:53 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:45.266 07:23:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:45.266 07:23:54 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:19:45.266 ************************************ 00:19:45.266 END TEST nvmf_tcp 00:19:45.266 ************************************ 00:19:45.266 00:19:45.266 real 12m20.586s 00:19:45.266 user 30m16.014s 00:19:45.266 sys 3m1.903s 00:19:45.266 07:23:54 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:45.266 07:23:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:45.266 07:23:54 -- common/autotest_common.sh@1142 -- 
# return 0 00:19:45.266 07:23:54 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:19:45.266 07:23:54 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:45.266 07:23:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:45.266 07:23:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:45.266 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:19:45.266 ************************************ 00:19:45.266 START TEST nvmf_dif 00:19:45.266 ************************************ 00:19:45.266 07:23:54 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:45.267 * Looking for test storage... 00:19:45.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:45.267 07:23:54 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:45.267 07:23:54 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.267 07:23:54 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.267 07:23:54 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.267 07:23:54 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.267 07:23:54 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.267 07:23:54 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.267 07:23:54 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:19:45.267 07:23:54 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:45.267 07:23:54 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:19:45.267 07:23:54 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:45.267 07:23:54 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:45.267 07:23:54 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:19:45.267 07:23:54 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.267 07:23:54 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:45.267 07:23:54 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:45.267 07:23:54 nvmf_dif -- 
nvmf/common.sh@432 -- # nvmf_veth_init 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:45.267 Cannot find device "nvmf_tgt_br" 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@155 -- # true 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:45.267 Cannot find device "nvmf_tgt_br2" 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@156 -- # true 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:45.267 07:23:54 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:45.267 Cannot find device "nvmf_tgt_br" 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@158 -- # true 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:45.525 Cannot find device "nvmf_tgt_br2" 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@159 -- # true 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:45.525 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@162 -- # true 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:45.525 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@163 -- # true 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@178 -- # ip addr 
add 10.0.0.1/24 dev nvmf_init_if 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:45.525 07:23:54 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:45.783 07:23:54 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:45.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:45.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:19:45.783 00:19:45.783 --- 10.0.0.2 ping statistics --- 00:19:45.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.783 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:19:45.783 07:23:54 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:45.783 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:45.783 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:19:45.783 00:19:45.783 --- 10.0.0.3 ping statistics --- 00:19:45.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.783 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:19:45.783 07:23:54 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:45.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:45.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:19:45.783 00:19:45.783 --- 10.0.0.1 ping statistics --- 00:19:45.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.783 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:19:45.783 07:23:54 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:45.783 07:23:54 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:19:45.783 07:23:54 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:19:45.783 07:23:54 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:46.042 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:46.042 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:46.042 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:46.042 07:23:54 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:46.042 07:23:54 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:46.042 07:23:54 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:46.042 07:23:54 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:46.042 07:23:54 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:46.042 07:23:54 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:46.042 07:23:54 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:46.042 07:23:54 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:19:46.042 07:23:54 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:46.042 07:23:54 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:46.042 07:23:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:46.042 07:23:54 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=82845 00:19:46.042 07:23:54 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:46.042 07:23:54 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 82845 00:19:46.042 07:23:54 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 82845 ']' 00:19:46.042 07:23:54 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.042 07:23:54 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:46.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.042 07:23:54 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.042 07:23:54 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:46.042 07:23:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:46.042 [2024-07-15 07:23:54.918058] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:19:46.042 [2024-07-15 07:23:54.918695] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.300 [2024-07-15 07:23:55.054904] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.300 [2024-07-15 07:23:55.112333] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
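At this point the harness has launched nvmf_tgt inside the nvmf_tgt_ns_spdk namespace and is waiting (waitforlisten) for its RPC socket at /var/tmp/spdk.sock. A rough standalone equivalent of that wait, sketched here for illustration rather than taken from the helper itself, simply polls the RPC server:

    # Illustrative poll: loop until the target answers a standard SPDK RPC.
    # The 30-second cap is an arbitrary choice for this sketch.
    for _ in $(seq 1 30); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            echo "nvmf_tgt RPC is up"
            break
        fi
        sleep 1
    done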
00:19:46.300 [2024-07-15 07:23:55.112388] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.300 [2024-07-15 07:23:55.112400] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.300 [2024-07-15 07:23:55.112409] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.300 [2024-07-15 07:23:55.112417] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:46.300 [2024-07-15 07:23:55.112443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.300 [2024-07-15 07:23:55.141296] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:46.300 07:23:55 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:46.300 07:23:55 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:19:46.300 07:23:55 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:46.300 07:23:55 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:46.300 07:23:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:46.300 07:23:55 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.300 07:23:55 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:19:46.300 07:23:55 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:46.300 07:23:55 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.300 07:23:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:46.300 [2024-07-15 07:23:55.227115] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.300 07:23:55 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.300 07:23:55 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:46.300 07:23:55 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:46.300 07:23:55 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:46.300 07:23:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:46.300 ************************************ 00:19:46.300 START TEST fio_dif_1_default 00:19:46.300 ************************************ 00:19:46.300 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:19:46.300 07:23:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:19:46.300 07:23:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:19:46.300 07:23:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:19:46.300 07:23:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:19:46.300 07:23:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:19:46.300 07:23:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:46.300 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.300 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:46.300 bdev_null0 00:19:46.300 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.300 07:23:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:46.300 
07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.300 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:46.559 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.559 07:23:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:46.559 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.559 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:46.559 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.559 07:23:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:46.559 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.559 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:46.559 [2024-07-15 07:23:55.271223] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.559 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.559 07:23:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:46.559 07:23:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:46.559 07:23:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:46.559 07:23:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:19:46.559 07:23:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:19:46.559 07:23:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.559 07:23:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:46.559 07:23:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:19:46.559 07:23:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.559 { 00:19:46.559 "params": { 00:19:46.560 "name": "Nvme$subsystem", 00:19:46.560 "trtype": "$TEST_TRANSPORT", 00:19:46.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.560 "adrfam": "ipv4", 00:19:46.560 "trsvcid": "$NVMF_PORT", 00:19:46.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.560 "hdgst": ${hdgst:-false}, 00:19:46.560 "ddgst": ${ddgst:-false} 00:19:46.560 }, 00:19:46.560 "method": "bdev_nvme_attach_controller" 00:19:46.560 } 00:19:46.560 EOF 00:19:46.560 )") 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- 
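Taken together, the rpc_cmd calls above configure the DIF-enabled target for this test; as a consolidated sketch they correspond to the following rpc.py sequence (same flags as logged, using the script path and RPC socket from this run):

    # Sketch of the target-side setup performed above, one RPC per line.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420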
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:46.560 "params": { 00:19:46.560 "name": "Nvme0", 00:19:46.560 "trtype": "tcp", 00:19:46.560 "traddr": "10.0.0.2", 00:19:46.560 "adrfam": "ipv4", 00:19:46.560 "trsvcid": "4420", 00:19:46.560 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:46.560 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:46.560 "hdgst": false, 00:19:46.560 "ddgst": false 00:19:46.560 }, 00:19:46.560 "method": "bdev_nvme_attach_controller" 00:19:46.560 }' 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:46.560 07:23:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:46.560 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:46.560 fio-3.35 00:19:46.560 Starting 1 thread 00:19:58.758 00:19:58.758 filename0: (groupid=0, jobs=1): err= 0: pid=82903: Mon Jul 15 07:24:05 2024 00:19:58.758 read: IOPS=8099, BW=31.6MiB/s (33.2MB/s)(316MiB/10001msec) 00:19:58.758 slat (usec): min=5, max=137, avg= 9.39, stdev= 2.90 00:19:58.758 clat (usec): min=406, max=4259, avg=465.85, stdev=53.33 00:19:58.758 lat (usec): min=414, max=4289, avg=475.24, stdev=53.89 00:19:58.758 clat percentiles (usec): 00:19:58.758 | 1.00th=[ 416], 5.00th=[ 424], 
10.00th=[ 429], 20.00th=[ 437], 00:19:58.758 | 30.00th=[ 445], 40.00th=[ 449], 50.00th=[ 453], 60.00th=[ 461], 00:19:58.758 | 70.00th=[ 469], 80.00th=[ 482], 90.00th=[ 529], 95.00th=[ 553], 00:19:58.758 | 99.00th=[ 619], 99.50th=[ 644], 99.90th=[ 725], 99.95th=[ 816], 00:19:58.758 | 99.99th=[ 1090] 00:19:58.758 bw ( KiB/s): min=28288, max=33984, per=99.92%, avg=32372.21, stdev=1448.99, samples=19 00:19:58.758 iops : min= 7072, max= 8496, avg=8093.05, stdev=362.25, samples=19 00:19:58.758 lat (usec) : 500=85.47%, 750=14.45%, 1000=0.07% 00:19:58.758 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:19:58.758 cpu : usr=83.63%, sys=14.28%, ctx=32, majf=0, minf=0 00:19:58.758 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:58.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:58.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:58.758 issued rwts: total=81000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:58.758 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:58.758 00:19:58.758 Run status group 0 (all jobs): 00:19:58.758 READ: bw=31.6MiB/s (33.2MB/s), 31.6MiB/s-31.6MiB/s (33.2MB/s-33.2MB/s), io=316MiB (332MB), run=10001-10001msec 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.758 00:19:58.758 real 0m10.881s 00:19:58.758 user 0m8.943s 00:19:58.758 sys 0m1.646s 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:58.758 ************************************ 00:19:58.758 END TEST fio_dif_1_default 00:19:58.758 ************************************ 00:19:58.758 07:24:06 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:19:58.758 07:24:06 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:19:58.758 07:24:06 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:58.758 07:24:06 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:58.758 07:24:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:58.758 ************************************ 00:19:58.758 START TEST fio_dif_1_multi_subsystems 00:19:58.758 ************************************ 
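Before following the multi-subsystems trace, it is worth condensing what the fio_dif_1_default block above actually did: a metadata-capable null bdev was exported over NVMe/TCP, fio read it back through the spdk_bdev external ioengine, and the subsystem and bdev were torn down again. The sketch below is a rough standalone equivalent, not the suite's own code: rpc_cmd in the trace is the test framework's RPC wrapper, so the scripts/rpc.py path, the --dif-type value for this particular run, the placeholder nvme0.json/job.fio files and the already-running nvmf_tgt are assumptions here, while the RPC method names, arguments, addresses and fio flags are taken verbatim from the trace.

    SPDK=/home/vagrant/spdk_repo/spdk                 # repo path as seen in the trace
    RPC="$SPDK/scripts/rpc.py"                        # assumed; the suite reaches this via rpc_cmd
    PLUGIN="$SPDK/build/fio/spdk_bdev"                # fio ioengine plugin used above

    # target side: null bdev with 16-byte metadata/DIF, exported over NVMe/TCP
    # (--dif-type 1 assumed for this run; the matching bdev_null_create is earlier than this excerpt)
    "$RPC" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: nvme0.json and job.fio stand in for the /dev/fd/62 (bdev config)
    # and /dev/fd/61 (job file) descriptors the suite generates on the fly
    LD_PRELOAD="$PLUGIN" /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./nvme0.json ./job.fio

    # teardown, exactly as traced above
    "$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    "$RPC" bdev_null_delete bdev_null0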
00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:58.758 bdev_null0 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:58.758 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:58.759 [2024-07-15 07:24:06.200439] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:58.759 bdev_null1 00:19:58.759 07:24:06 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:58.759 { 00:19:58.759 "params": { 00:19:58.759 "name": "Nvme$subsystem", 00:19:58.759 "trtype": "$TEST_TRANSPORT", 00:19:58.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.759 "adrfam": "ipv4", 00:19:58.759 "trsvcid": "$NVMF_PORT", 00:19:58.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.759 "hdgst": ${hdgst:-false}, 00:19:58.759 "ddgst": ${ddgst:-false} 00:19:58.759 }, 00:19:58.759 "method": "bdev_nvme_attach_controller" 00:19:58.759 } 00:19:58.759 EOF 00:19:58.759 )") 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:58.759 { 00:19:58.759 "params": { 00:19:58.759 "name": "Nvme$subsystem", 00:19:58.759 "trtype": "$TEST_TRANSPORT", 00:19:58.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.759 "adrfam": "ipv4", 00:19:58.759 "trsvcid": "$NVMF_PORT", 00:19:58.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.759 "hdgst": ${hdgst:-false}, 00:19:58.759 "ddgst": ${ddgst:-false} 00:19:58.759 }, 00:19:58.759 "method": "bdev_nvme_attach_controller" 00:19:58.759 } 00:19:58.759 EOF 00:19:58.759 )") 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
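The config=() / config+=("$(cat <<-EOF ...)") / IFS=, / jq . records interleaved here are gen_nvmf_target_json at work: it emits one bdev_nvme_attach_controller entry per requested subsystem, joins the fragments with commas and runs the result through jq before handing it to fio on /dev/fd/62. A minimal re-creation of that idea is sketched below; it builds the fragments with jq -n instead of the bash heredoc the suite uses, and the outer "subsystems"/"bdev" wrapper is assumed from SPDK's usual JSON-config layout rather than visible in this excerpt.

    gen_sub_conf() {
        local sub entries=()
        for sub in "$@"; do
            # one attach-controller entry per subsystem id, mirroring the traced heredoc template
            entries+=("$(jq -n --arg sub "$sub" '{
                method: "bdev_nvme_attach_controller",
                params: {
                    name: ("Nvme" + $sub),
                    trtype: "tcp", traddr: "10.0.0.2", adrfam: "ipv4", trsvcid: "4420",
                    subnqn: ("nqn.2016-06.io.spdk:cnode" + $sub),
                    hostnqn: ("nqn.2016-06.io.spdk:host" + $sub),
                    hdgst: false, ddgst: false
                }
            }')")
        done
        # join the fragments and wrap them, as the traced IFS=,/printf/jq steps do
        printf '%s\n' "${entries[@]}" |
            jq -s '{subsystems: [{subsystem: "bdev", config: .}]}'
    }

    gen_sub_conf 0 1    # two controllers, matching the Nvme0/Nvme1 config printed further down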
00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:58.759 "params": { 00:19:58.759 "name": "Nvme0", 00:19:58.759 "trtype": "tcp", 00:19:58.759 "traddr": "10.0.0.2", 00:19:58.759 "adrfam": "ipv4", 00:19:58.759 "trsvcid": "4420", 00:19:58.759 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:58.759 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:58.759 "hdgst": false, 00:19:58.759 "ddgst": false 00:19:58.759 }, 00:19:58.759 "method": "bdev_nvme_attach_controller" 00:19:58.759 },{ 00:19:58.759 "params": { 00:19:58.759 "name": "Nvme1", 00:19:58.759 "trtype": "tcp", 00:19:58.759 "traddr": "10.0.0.2", 00:19:58.759 "adrfam": "ipv4", 00:19:58.759 "trsvcid": "4420", 00:19:58.759 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.759 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:58.759 "hdgst": false, 00:19:58.759 "ddgst": false 00:19:58.759 }, 00:19:58.759 "method": "bdev_nvme_attach_controller" 00:19:58.759 }' 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:58.759 07:24:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:58.759 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:58.760 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:58.760 fio-3.35 00:19:58.760 Starting 2 threads 00:20:08.723 00:20:08.723 filename0: (groupid=0, jobs=1): err= 0: pid=83062: Mon Jul 15 07:24:16 2024 00:20:08.723 read: IOPS=4123, BW=16.1MiB/s (16.9MB/s)(161MiB/10001msec) 00:20:08.723 slat (nsec): min=4354, max=75617, avg=14300.56, stdev=5576.01 00:20:08.723 clat (usec): min=529, max=6963, avg=929.82, stdev=577.22 00:20:08.723 lat (usec): min=559, max=6988, avg=944.12, stdev=577.69 00:20:08.723 clat percentiles (usec): 00:20:08.723 | 1.00th=[ 709], 5.00th=[ 734], 10.00th=[ 750], 20.00th=[ 783], 00:20:08.723 | 30.00th=[ 807], 40.00th=[ 816], 50.00th=[ 832], 60.00th=[ 840], 00:20:08.723 | 70.00th=[ 857], 80.00th=[ 873], 90.00th=[ 1057], 95.00th=[ 1123], 00:20:08.723 | 99.00th=[ 4752], 99.50th=[ 5211], 99.90th=[ 5997], 99.95th=[ 5997], 00:20:08.723 | 99.99th=[ 6652] 00:20:08.723 bw ( KiB/s): min= 9184, max=18976, per=50.91%, avg=16845.47, stdev=2846.19, samples=19 00:20:08.723 iops : min= 
2296, max= 4744, avg=4211.37, stdev=711.55, samples=19 00:20:08.723 lat (usec) : 750=10.92%, 1000=76.85% 00:20:08.723 lat (msec) : 2=10.04%, 4=0.53%, 10=1.66% 00:20:08.723 cpu : usr=89.20%, sys=9.19%, ctx=13, majf=0, minf=9 00:20:08.723 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:08.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.723 issued rwts: total=41236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:08.723 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:08.723 filename1: (groupid=0, jobs=1): err= 0: pid=83063: Mon Jul 15 07:24:16 2024 00:20:08.723 read: IOPS=4148, BW=16.2MiB/s (17.0MB/s)(162MiB/10002msec) 00:20:08.723 slat (nsec): min=7833, max=50460, avg=13546.49, stdev=4120.37 00:20:08.723 clat (usec): min=426, max=6059, avg=925.93, stdev=565.69 00:20:08.723 lat (usec): min=435, max=6075, avg=939.48, stdev=565.74 00:20:08.723 clat percentiles (usec): 00:20:08.723 | 1.00th=[ 750], 5.00th=[ 775], 10.00th=[ 783], 20.00th=[ 791], 00:20:08.723 | 30.00th=[ 799], 40.00th=[ 807], 50.00th=[ 816], 60.00th=[ 824], 00:20:08.723 | 70.00th=[ 840], 80.00th=[ 857], 90.00th=[ 1045], 95.00th=[ 1090], 00:20:08.723 | 99.00th=[ 4621], 99.50th=[ 5080], 99.90th=[ 5932], 99.95th=[ 5997], 00:20:08.723 | 99.99th=[ 6063] 00:20:08.723 bw ( KiB/s): min= 9184, max=18976, per=51.23%, avg=16950.16, stdev=2814.57, samples=19 00:20:08.723 iops : min= 2296, max= 4744, avg=4237.53, stdev=703.63, samples=19 00:20:08.723 lat (usec) : 500=0.21%, 750=0.73%, 1000=86.87% 00:20:08.723 lat (msec) : 2=9.97%, 4=0.63%, 10=1.58% 00:20:08.723 cpu : usr=89.43%, sys=9.07%, ctx=11, majf=0, minf=0 00:20:08.723 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:08.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.723 issued rwts: total=41492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:08.723 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:08.723 00:20:08.723 Run status group 0 (all jobs): 00:20:08.723 READ: bw=32.3MiB/s (33.9MB/s), 16.1MiB/s-16.2MiB/s (16.9MB/s-17.0MB/s), io=323MiB (339MB), run=10001-10002msec 00:20:08.723 07:24:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:08.723 07:24:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:08.723 07:24:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:08.723 07:24:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:08.723 07:24:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:08.723 07:24:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:08.723 07:24:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.723 07:24:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:08.723 07:24:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.723 07:24:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:08.723 07:24:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.723 07:24:17 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:08.723 07:24:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.723 07:24:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:08.723 07:24:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:08.723 07:24:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:08.723 07:24:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:08.723 07:24:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.723 07:24:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:08.723 07:24:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.723 07:24:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:08.723 07:24:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.723 07:24:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:08.723 07:24:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.723 00:20:08.723 real 0m11.001s 00:20:08.723 user 0m18.514s 00:20:08.723 sys 0m2.063s 00:20:08.723 07:24:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:08.724 07:24:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:08.724 ************************************ 00:20:08.724 END TEST fio_dif_1_multi_subsystems 00:20:08.724 ************************************ 00:20:08.724 07:24:17 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:08.724 07:24:17 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:08.724 07:24:17 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:08.724 07:24:17 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:08.724 07:24:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:08.724 ************************************ 00:20:08.724 START TEST fio_dif_rand_params 00:20:08.724 ************************************ 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
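The fio_dif_rand_params test starting in this stretch reuses the same target plumbing but randomizes the job shape; its first pass uses a DIF type 3 null bdev with bs=128k, numjobs=3, iodepth=3 and runtime=5, which matches the 128KiB/iodepth=3/3-thread randread job and the roughly 5-second runs visible in the fio output further down. Written as a plain fio job file instead of the /dev/fd/61 pipe the suite generates, that pass corresponds to roughly the sketch below; the Nvme0n1 filename is an assumption about the bdev name the attach-controller config produces, and time_based is inferred from the runtime cap rather than shown in the trace.

    ; approximate job for the first rand_params pass (sketch, not the suite's gen_fio_conf output)
    [global]
    ioengine=spdk_bdev      ; passed on the fio command line in the trace
    thread=1                ; the plugin runs jobs as threads ("Starting 3 threads")
    rw=randread
    bs=128k
    iodepth=3
    time_based=1
    runtime=5

    [filename0]
    filename=Nvme0n1        ; assumed: first namespace of the controller named Nvme0
    numjobs=3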
00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:08.724 bdev_null0 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:08.724 [2024-07-15 07:24:17.245279] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:08.724 { 00:20:08.724 "params": { 00:20:08.724 "name": "Nvme$subsystem", 00:20:08.724 "trtype": "$TEST_TRANSPORT", 00:20:08.724 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.724 "adrfam": "ipv4", 00:20:08.724 "trsvcid": "$NVMF_PORT", 00:20:08.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.724 "hdgst": ${hdgst:-false}, 00:20:08.724 "ddgst": ${ddgst:-false} 00:20:08.724 }, 00:20:08.724 "method": "bdev_nvme_attach_controller" 00:20:08.724 } 00:20:08.724 EOF 00:20:08.724 )") 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:08.724 "params": { 00:20:08.724 "name": "Nvme0", 00:20:08.724 "trtype": "tcp", 00:20:08.724 "traddr": "10.0.0.2", 00:20:08.724 "adrfam": "ipv4", 00:20:08.724 "trsvcid": "4420", 00:20:08.724 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:08.724 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:08.724 "hdgst": false, 00:20:08.724 "ddgst": false 00:20:08.724 }, 00:20:08.724 "method": "bdev_nvme_attach_controller" 00:20:08.724 }' 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:08.724 07:24:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:08.724 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:08.724 ... 
00:20:08.724 fio-3.35 00:20:08.724 Starting 3 threads 00:20:15.281 00:20:15.281 filename0: (groupid=0, jobs=1): err= 0: pid=83218: Mon Jul 15 07:24:23 2024 00:20:15.281 read: IOPS=238, BW=29.8MiB/s (31.2MB/s)(149MiB/5013msec) 00:20:15.281 slat (nsec): min=4980, max=71758, avg=22602.52, stdev=11226.28 00:20:15.281 clat (usec): min=11773, max=75589, avg=12534.76, stdev=4102.19 00:20:15.281 lat (usec): min=11791, max=75640, avg=12557.36, stdev=4103.44 00:20:15.281 clat percentiles (usec): 00:20:15.281 | 1.00th=[11863], 5.00th=[11863], 10.00th=[11863], 20.00th=[11994], 00:20:15.281 | 30.00th=[11994], 40.00th=[11994], 50.00th=[12125], 60.00th=[12125], 00:20:15.281 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12911], 95.00th=[13304], 00:20:15.281 | 99.00th=[14746], 99.50th=[63701], 99.90th=[76022], 99.95th=[76022], 00:20:15.281 | 99.99th=[76022] 00:20:15.281 bw ( KiB/s): min=24576, max=32256, per=33.34%, avg=30489.60, stdev=2319.59, samples=10 00:20:15.281 iops : min= 192, max= 252, avg=238.20, stdev=18.12, samples=10 00:20:15.281 lat (msec) : 20=99.50%, 100=0.50% 00:20:15.281 cpu : usr=90.40%, sys=8.82%, ctx=11, majf=0, minf=9 00:20:15.281 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:15.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.281 issued rwts: total=1194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.281 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:15.281 filename0: (groupid=0, jobs=1): err= 0: pid=83219: Mon Jul 15 07:24:23 2024 00:20:15.281 read: IOPS=238, BW=29.8MiB/s (31.2MB/s)(149MiB/5011msec) 00:20:15.281 slat (nsec): min=8073, max=67545, avg=22651.02, stdev=9684.32 00:20:15.281 clat (usec): min=10939, max=75472, avg=12532.35, stdev=3747.40 00:20:15.281 lat (usec): min=10968, max=75528, avg=12555.00, stdev=3747.91 00:20:15.281 clat percentiles (usec): 00:20:15.281 | 1.00th=[11863], 5.00th=[11863], 10.00th=[11863], 20.00th=[11994], 00:20:15.281 | 30.00th=[11994], 40.00th=[11994], 50.00th=[12125], 60.00th=[12125], 00:20:15.281 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12911], 95.00th=[13173], 00:20:15.281 | 99.00th=[16057], 99.50th=[45351], 99.90th=[74974], 99.95th=[74974], 00:20:15.281 | 99.99th=[74974] 00:20:15.281 bw ( KiB/s): min=24625, max=32256, per=33.34%, avg=30494.50, stdev=2305.72, samples=10 00:20:15.281 iops : min= 192, max= 252, avg=238.20, stdev=18.12, samples=10 00:20:15.281 lat (msec) : 20=99.25%, 50=0.50%, 100=0.25% 00:20:15.281 cpu : usr=90.22%, sys=8.94%, ctx=10, majf=0, minf=9 00:20:15.281 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:15.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.281 issued rwts: total=1194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.281 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:15.281 filename0: (groupid=0, jobs=1): err= 0: pid=83220: Mon Jul 15 07:24:23 2024 00:20:15.281 read: IOPS=238, BW=29.8MiB/s (31.2MB/s)(149MiB/5012msec) 00:20:15.281 slat (nsec): min=7392, max=65301, avg=22909.64, stdev=9560.24 00:20:15.281 clat (usec): min=10935, max=75472, avg=12533.23, stdev=3748.60 00:20:15.281 lat (usec): min=10964, max=75528, avg=12556.14, stdev=3749.10 00:20:15.281 clat percentiles (usec): 00:20:15.281 | 1.00th=[11731], 5.00th=[11863], 10.00th=[11863], 20.00th=[11994], 00:20:15.281 | 
30.00th=[11994], 40.00th=[11994], 50.00th=[12125], 60.00th=[12125], 00:20:15.281 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12911], 95.00th=[13173], 00:20:15.281 | 99.00th=[16319], 99.50th=[45351], 99.90th=[74974], 99.95th=[74974], 00:20:15.281 | 99.99th=[74974] 00:20:15.281 bw ( KiB/s): min=24625, max=32256, per=33.34%, avg=30494.50, stdev=2305.72, samples=10 00:20:15.281 iops : min= 192, max= 252, avg=238.20, stdev=18.12, samples=10 00:20:15.281 lat (msec) : 20=99.25%, 50=0.50%, 100=0.25% 00:20:15.281 cpu : usr=90.86%, sys=8.20%, ctx=11, majf=0, minf=9 00:20:15.281 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:15.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.281 issued rwts: total=1194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.281 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:15.281 00:20:15.282 Run status group 0 (all jobs): 00:20:15.282 READ: bw=89.3MiB/s (93.7MB/s), 29.8MiB/s-29.8MiB/s (31.2MB/s-31.2MB/s), io=448MiB (469MB), run=5011-5013msec 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 
512 --md-size 16 --dif-type 2 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.282 bdev_null0 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.282 [2024-07-15 07:24:23.250773] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.282 bdev_null1 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
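This second rand_params pass switches to DIF type 2 with bs=4k, numjobs=8, iodepth=16 and two extra files, so create_subsystems 0 1 2 walks the null-bdev/subsystem/namespace/listener sequence three times; the listener RPC for cnode0 and the cnode1/cnode2 iterations follow in the trace below. Collapsed into a loop, the setup amounts to roughly this sketch (rpc.py path assumed, as before; the suite issues the same calls through rpc_cmd):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    for sub in 0 1 2; do
        "$RPC" bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
        "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
            --serial-number "53313233-$sub" --allow-any-host
        "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
        "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" -t tcp -a 10.0.0.2 -s 4420
    done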
00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.282 bdev_null2 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:15.282 { 00:20:15.282 "params": { 00:20:15.282 "name": "Nvme$subsystem", 00:20:15.282 "trtype": "$TEST_TRANSPORT", 00:20:15.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:15.282 "adrfam": "ipv4", 00:20:15.282 "trsvcid": "$NVMF_PORT", 00:20:15.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:15.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:15.282 "hdgst": ${hdgst:-false}, 00:20:15.282 "ddgst": ${ddgst:-false} 00:20:15.282 }, 00:20:15.282 "method": "bdev_nvme_attach_controller" 00:20:15.282 } 00:20:15.282 EOF 00:20:15.282 )") 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:15.282 07:24:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:15.282 { 00:20:15.282 "params": { 00:20:15.283 "name": "Nvme$subsystem", 00:20:15.283 "trtype": "$TEST_TRANSPORT", 00:20:15.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:15.283 "adrfam": "ipv4", 00:20:15.283 "trsvcid": "$NVMF_PORT", 00:20:15.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:15.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:15.283 "hdgst": ${hdgst:-false}, 00:20:15.283 "ddgst": ${ddgst:-false} 00:20:15.283 }, 00:20:15.283 "method": "bdev_nvme_attach_controller" 00:20:15.283 } 00:20:15.283 EOF 00:20:15.283 )") 00:20:15.283 07:24:23 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # cat 00:20:15.283 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:15.283 07:24:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:15.283 07:24:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:15.283 07:24:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:15.283 { 00:20:15.283 "params": { 00:20:15.283 "name": "Nvme$subsystem", 00:20:15.283 "trtype": "$TEST_TRANSPORT", 00:20:15.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:15.283 "adrfam": "ipv4", 00:20:15.283 "trsvcid": "$NVMF_PORT", 00:20:15.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:15.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:15.283 "hdgst": ${hdgst:-false}, 00:20:15.283 "ddgst": ${ddgst:-false} 00:20:15.283 }, 00:20:15.283 "method": "bdev_nvme_attach_controller" 00:20:15.283 } 00:20:15.283 EOF 00:20:15.283 )") 00:20:15.283 07:24:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:15.283 07:24:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:20:15.283 07:24:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:15.283 07:24:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:15.283 "params": { 00:20:15.283 "name": "Nvme0", 00:20:15.283 "trtype": "tcp", 00:20:15.283 "traddr": "10.0.0.2", 00:20:15.283 "adrfam": "ipv4", 00:20:15.283 "trsvcid": "4420", 00:20:15.283 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:15.283 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:15.283 "hdgst": false, 00:20:15.283 "ddgst": false 00:20:15.283 }, 00:20:15.283 "method": "bdev_nvme_attach_controller" 00:20:15.283 },{ 00:20:15.283 "params": { 00:20:15.283 "name": "Nvme1", 00:20:15.283 "trtype": "tcp", 00:20:15.283 "traddr": "10.0.0.2", 00:20:15.283 "adrfam": "ipv4", 00:20:15.283 "trsvcid": "4420", 00:20:15.283 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.283 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:15.283 "hdgst": false, 00:20:15.283 "ddgst": false 00:20:15.283 }, 00:20:15.283 "method": "bdev_nvme_attach_controller" 00:20:15.283 },{ 00:20:15.283 "params": { 00:20:15.283 "name": "Nvme2", 00:20:15.283 "trtype": "tcp", 00:20:15.283 "traddr": "10.0.0.2", 00:20:15.283 "adrfam": "ipv4", 00:20:15.283 "trsvcid": "4420", 00:20:15.283 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:15.283 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:15.283 "hdgst": false, 00:20:15.283 "ddgst": false 00:20:15.283 }, 00:20:15.283 "method": "bdev_nvme_attach_controller" 00:20:15.283 }' 00:20:15.283 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:15.283 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:15.283 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:15.283 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:15.283 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:15.283 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:15.283 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:15.283 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:15.283 07:24:23 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:15.283 07:24:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:15.283 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:15.283 ... 00:20:15.283 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:15.283 ... 00:20:15.283 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:15.283 ... 00:20:15.283 fio-3.35 00:20:15.283 Starting 24 threads 00:20:27.480 00:20:27.480 filename0: (groupid=0, jobs=1): err= 0: pid=83311: Mon Jul 15 07:24:34 2024 00:20:27.480 read: IOPS=123, BW=495KiB/s (506kB/s)(4948KiB/10004msec) 00:20:27.480 slat (nsec): min=8147, max=46071, avg=17718.79, stdev=7071.64 00:20:27.480 clat (msec): min=2, max=302, avg=129.26, stdev=83.35 00:20:27.480 lat (msec): min=2, max=302, avg=129.28, stdev=83.35 00:20:27.480 clat percentiles (msec): 00:20:27.480 | 1.00th=[ 4], 5.00th=[ 16], 10.00th=[ 52], 20.00th=[ 66], 00:20:27.480 | 30.00th=[ 73], 40.00th=[ 80], 50.00th=[ 89], 60.00th=[ 112], 00:20:27.480 | 70.00th=[ 199], 80.00th=[ 218], 90.00th=[ 271], 95.00th=[ 288], 00:20:27.480 | 99.00th=[ 305], 99.50th=[ 305], 99.90th=[ 305], 99.95th=[ 305], 00:20:27.480 | 99.99th=[ 305] 00:20:27.480 bw ( KiB/s): min= 143, max= 960, per=3.78%, avg=447.84, stdev=275.05, samples=19 00:20:27.480 iops : min= 35, max= 240, avg=111.89, stdev=68.78, samples=19 00:20:27.480 lat (msec) : 4=1.54%, 10=2.83%, 20=1.05%, 50=3.64%, 100=46.24% 00:20:27.480 lat (msec) : 250=34.36%, 500=10.35% 00:20:27.480 cpu : usr=41.60%, sys=3.14%, ctx=1309, majf=0, minf=9 00:20:27.480 IO depths : 1=0.1%, 2=4.9%, 4=19.6%, 8=62.1%, 16=13.3%, 32=0.0%, >=64=0.0% 00:20:27.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.480 complete : 0=0.0%, 4=92.9%, 8=2.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.480 issued rwts: total=1237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.480 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:27.480 filename0: (groupid=0, jobs=1): err= 0: pid=83312: Mon Jul 15 07:24:34 2024 00:20:27.480 read: IOPS=122, BW=492KiB/s (504kB/s)(4928KiB/10018msec) 00:20:27.480 slat (usec): min=4, max=2934, avg=23.72, stdev=83.50 00:20:27.480 clat (msec): min=24, max=301, avg=129.91, stdev=81.61 00:20:27.480 lat (msec): min=24, max=301, avg=129.94, stdev=81.61 00:20:27.480 clat percentiles (msec): 00:20:27.480 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 61], 00:20:27.480 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 85], 60.00th=[ 107], 00:20:27.480 | 70.00th=[ 199], 80.00th=[ 215], 90.00th=[ 257], 95.00th=[ 292], 00:20:27.480 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 300], 99.95th=[ 300], 00:20:27.480 | 99.99th=[ 300] 00:20:27.480 bw ( KiB/s): min= 143, max= 968, per=3.95%, avg=468.32, stdev=298.55, samples=19 00:20:27.480 iops : min= 35, max= 242, avg=117.00, stdev=74.65, samples=19 00:20:27.480 lat (msec) : 50=9.25%, 100=49.43%, 250=30.93%, 500=10.39% 00:20:27.480 cpu : usr=34.36%, sys=2.53%, ctx=1296, majf=0, minf=9 00:20:27.480 IO depths : 1=0.1%, 2=2.8%, 4=11.0%, 8=71.5%, 16=14.7%, 32=0.0%, >=64=0.0% 00:20:27.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.480 complete : 0=0.0%, 
4=90.3%, 8=7.3%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.480 issued rwts: total=1232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.480 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:27.480 filename0: (groupid=0, jobs=1): err= 0: pid=83313: Mon Jul 15 07:24:34 2024 00:20:27.480 read: IOPS=114, BW=460KiB/s (471kB/s)(4600KiB/10008msec) 00:20:27.480 slat (usec): min=8, max=8034, avg=37.40, stdev=354.49 00:20:27.480 clat (msec): min=8, max=299, avg=138.93, stdev=80.17 00:20:27.480 lat (msec): min=8, max=299, avg=138.97, stdev=80.16 00:20:27.480 clat percentiles (msec): 00:20:27.480 | 1.00th=[ 10], 5.00th=[ 61], 10.00th=[ 70], 20.00th=[ 72], 00:20:27.480 | 30.00th=[ 74], 40.00th=[ 84], 50.00th=[ 96], 60.00th=[ 124], 00:20:27.480 | 70.00th=[ 201], 80.00th=[ 226], 90.00th=[ 271], 95.00th=[ 292], 00:20:27.480 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 300], 99.95th=[ 300], 00:20:27.480 | 99.99th=[ 300] 00:20:27.480 bw ( KiB/s): min= 144, max= 896, per=3.69%, avg=437.74, stdev=255.07, samples=19 00:20:27.480 iops : min= 36, max= 224, avg=109.37, stdev=63.76, samples=19 00:20:27.480 lat (msec) : 10=1.22%, 20=0.17%, 50=1.22%, 100=49.04%, 250=37.22% 00:20:27.480 lat (msec) : 500=11.13% 00:20:27.480 cpu : usr=31.03%, sys=2.35%, ctx=872, majf=0, minf=9 00:20:27.480 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.3%, 32=0.0%, >=64=0.0% 00:20:27.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.480 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.480 issued rwts: total=1150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.480 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:27.480 filename0: (groupid=0, jobs=1): err= 0: pid=83314: Mon Jul 15 07:24:34 2024 00:20:27.480 read: IOPS=115, BW=462KiB/s (473kB/s)(4640KiB/10044msec) 00:20:27.480 slat (usec): min=5, max=5039, avg=25.00, stdev=147.61 00:20:27.480 clat (msec): min=37, max=302, avg=138.30, stdev=80.43 00:20:27.480 lat (msec): min=37, max=302, avg=138.33, stdev=80.44 00:20:27.480 clat percentiles (msec): 00:20:27.480 | 1.00th=[ 39], 5.00th=[ 55], 10.00th=[ 66], 20.00th=[ 72], 00:20:27.480 | 30.00th=[ 78], 40.00th=[ 84], 50.00th=[ 96], 60.00th=[ 113], 00:20:27.480 | 70.00th=[ 203], 80.00th=[ 220], 90.00th=[ 284], 95.00th=[ 296], 00:20:27.480 | 99.00th=[ 296], 99.50th=[ 296], 99.90th=[ 305], 99.95th=[ 305], 00:20:27.480 | 99.99th=[ 305] 00:20:27.480 bw ( KiB/s): min= 144, max= 880, per=3.86%, avg=457.50, stdev=256.43, samples=20 00:20:27.480 iops : min= 36, max= 220, avg=114.35, stdev=64.13, samples=20 00:20:27.480 lat (msec) : 50=3.36%, 100=49.91%, 250=34.14%, 500=12.59% 00:20:27.480 cpu : usr=36.82%, sys=2.60%, ctx=1108, majf=0, minf=9 00:20:27.480 IO depths : 1=0.2%, 2=5.6%, 4=22.0%, 8=59.4%, 16=12.8%, 32=0.0%, >=64=0.0% 00:20:27.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.480 complete : 0=0.0%, 4=93.5%, 8=1.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.480 issued rwts: total=1160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.480 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:27.480 filename0: (groupid=0, jobs=1): err= 0: pid=83315: Mon Jul 15 07:24:34 2024 00:20:27.480 read: IOPS=135, BW=542KiB/s (555kB/s)(5444KiB/10044msec) 00:20:27.480 slat (usec): min=8, max=8025, avg=21.14, stdev=217.31 00:20:27.480 clat (msec): min=22, max=299, avg=117.85, stdev=70.82 00:20:27.480 lat (msec): min=22, max=299, avg=117.87, stdev=70.82 00:20:27.480 clat percentiles (msec): 00:20:27.480 | 1.00th=[ 
23], 5.00th=[ 43], 10.00th=[ 50], 20.00th=[ 62], 00:20:27.480 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 85], 60.00th=[ 108], 00:20:27.480 | 70.00th=[ 144], 80.00th=[ 194], 90.00th=[ 218], 95.00th=[ 275], 00:20:27.480 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 300], 99.95th=[ 300], 00:20:27.480 | 99.99th=[ 300] 00:20:27.480 bw ( KiB/s): min= 255, max= 968, per=4.55%, avg=539.15, stdev=279.06, samples=20 00:20:27.480 iops : min= 63, max= 242, avg=134.70, stdev=69.79, samples=20 00:20:27.480 lat (msec) : 50=10.36%, 100=48.49%, 250=35.27%, 500=5.88% 00:20:27.480 cpu : usr=30.88%, sys=2.30%, ctx=849, majf=0, minf=9 00:20:27.480 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.3%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:27.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.480 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.480 issued rwts: total=1361,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.480 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:27.480 filename0: (groupid=0, jobs=1): err= 0: pid=83316: Mon Jul 15 07:24:34 2024 00:20:27.480 read: IOPS=118, BW=476KiB/s (487kB/s)(4780KiB/10044msec) 00:20:27.480 slat (usec): min=3, max=8058, avg=34.95, stdev=328.50 00:20:27.480 clat (msec): min=39, max=310, avg=134.13, stdev=83.25 00:20:27.480 lat (msec): min=39, max=310, avg=134.17, stdev=83.24 00:20:27.480 clat percentiles (msec): 00:20:27.480 | 1.00th=[ 41], 5.00th=[ 50], 10.00th=[ 60], 20.00th=[ 71], 00:20:27.480 | 30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 86], 60.00th=[ 107], 00:20:27.480 | 70.00th=[ 192], 80.00th=[ 218], 90.00th=[ 288], 95.00th=[ 296], 00:20:27.480 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 309], 99.95th=[ 309], 00:20:27.480 | 99.99th=[ 309] 00:20:27.480 bw ( KiB/s): min= 143, max= 920, per=4.00%, avg=473.65, stdev=274.58, samples=20 00:20:27.480 iops : min= 35, max= 230, avg=118.35, stdev=68.71, samples=20 00:20:27.480 lat (msec) : 50=5.77%, 100=51.38%, 250=29.46%, 500=13.39% 00:20:27.480 cpu : usr=34.15%, sys=2.21%, ctx=967, majf=0, minf=9 00:20:27.480 IO depths : 1=0.1%, 2=4.9%, 4=19.3%, 8=62.4%, 16=13.3%, 32=0.0%, >=64=0.0% 00:20:27.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.481 complete : 0=0.0%, 4=92.7%, 8=3.0%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.481 issued rwts: total=1195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.481 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:27.481 filename0: (groupid=0, jobs=1): err= 0: pid=83317: Mon Jul 15 07:24:34 2024 00:20:27.481 read: IOPS=132, BW=531KiB/s (544kB/s)(5316KiB/10004msec) 00:20:27.481 slat (nsec): min=8339, max=52332, avg=14000.54, stdev=6151.15 00:20:27.481 clat (msec): min=3, max=316, avg=120.35, stdev=84.20 00:20:27.481 lat (msec): min=3, max=316, avg=120.36, stdev=84.20 00:20:27.481 clat percentiles (msec): 00:20:27.481 | 1.00th=[ 5], 5.00th=[ 26], 10.00th=[ 48], 20.00th=[ 59], 00:20:27.481 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 94], 00:20:27.481 | 70.00th=[ 190], 80.00th=[ 215], 90.00th=[ 259], 95.00th=[ 288], 00:20:27.481 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 317], 99.95th=[ 317], 00:20:27.481 | 99.99th=[ 317] 00:20:27.481 bw ( KiB/s): min= 144, max= 976, per=4.16%, avg=492.47, stdev=323.08, samples=19 00:20:27.481 iops : min= 36, max= 244, avg=123.05, stdev=80.76, samples=19 00:20:27.481 lat (msec) : 4=0.68%, 10=2.41%, 20=0.68%, 50=13.54%, 100=45.22% 00:20:27.481 lat (msec) : 250=26.64%, 500=10.84% 00:20:27.481 cpu : usr=30.82%, sys=2.32%, ctx=850, 
majf=0, minf=9 00:20:27.481 IO depths : 1=0.1%, 2=1.8%, 4=7.1%, 8=75.9%, 16=15.0%, 32=0.0%, >=64=0.0% 00:20:27.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.481 complete : 0=0.0%, 4=89.0%, 8=9.4%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.481 issued rwts: total=1329,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.481 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:27.481 filename0: (groupid=0, jobs=1): err= 0: pid=83318: Mon Jul 15 07:24:34 2024 00:20:27.481 read: IOPS=130, BW=520KiB/s (533kB/s)(5204KiB/10002msec) 00:20:27.481 slat (usec): min=4, max=3502, avg=21.94, stdev=134.50 00:20:27.481 clat (usec): min=999, max=304289, avg=122848.61, stdev=85923.75 00:20:27.481 lat (usec): min=1007, max=304334, avg=122870.54, stdev=85924.50 00:20:27.481 clat percentiles (usec): 00:20:27.481 | 1.00th=[ 1549], 5.00th=[ 3884], 10.00th=[ 46400], 20.00th=[ 60556], 00:20:27.481 | 30.00th=[ 70779], 40.00th=[ 76022], 50.00th=[ 83362], 60.00th=[ 99091], 00:20:27.481 | 70.00th=[191890], 80.00th=[214959], 90.00th=[240124], 95.00th=[291505], 00:20:27.481 | 99.00th=[304088], 99.50th=[304088], 99.90th=[304088], 99.95th=[304088], 00:20:27.481 | 99.99th=[304088] 00:20:27.481 bw ( KiB/s): min= 143, max= 968, per=3.83%, avg=454.58, stdev=281.20, samples=19 00:20:27.481 iops : min= 35, max= 242, avg=113.58, stdev=70.31, samples=19 00:20:27.481 lat (usec) : 1000=0.08% 00:20:27.481 lat (msec) : 2=3.07%, 4=2.08%, 10=2.61%, 20=0.77%, 50=4.92% 00:20:27.481 lat (msec) : 100=46.81%, 250=29.82%, 500=9.84% 00:20:27.481 cpu : usr=34.82%, sys=2.69%, ctx=1308, majf=0, minf=9 00:20:27.481 IO depths : 1=0.2%, 2=4.3%, 4=16.8%, 8=65.3%, 16=13.5%, 32=0.0%, >=64=0.0% 00:20:27.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.481 complete : 0=0.0%, 4=91.9%, 8=4.3%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.481 issued rwts: total=1301,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.481 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:27.481 filename1: (groupid=0, jobs=1): err= 0: pid=83319: Mon Jul 15 07:24:34 2024 00:20:27.481 read: IOPS=113, BW=454KiB/s (465kB/s)(4544KiB/10017msec) 00:20:27.481 slat (usec): min=8, max=6819, avg=26.85, stdev=201.87 00:20:27.481 clat (msec): min=59, max=321, avg=140.74, stdev=78.86 00:20:27.481 lat (msec): min=59, max=321, avg=140.77, stdev=78.85 00:20:27.481 clat percentiles (msec): 00:20:27.481 | 1.00th=[ 64], 5.00th=[ 67], 10.00th=[ 70], 20.00th=[ 74], 00:20:27.481 | 30.00th=[ 78], 40.00th=[ 82], 50.00th=[ 92], 60.00th=[ 128], 00:20:27.481 | 70.00th=[ 205], 80.00th=[ 218], 90.00th=[ 279], 95.00th=[ 296], 00:20:27.481 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 321], 99.95th=[ 321], 00:20:27.481 | 99.99th=[ 321] 00:20:27.481 bw ( KiB/s): min= 143, max= 896, per=3.69%, avg=437.63, stdev=255.51, samples=19 00:20:27.481 iops : min= 35, max= 224, avg=109.32, stdev=63.90, samples=19 00:20:27.481 lat (msec) : 100=52.11%, 250=35.39%, 500=12.50% 00:20:27.481 cpu : usr=39.38%, sys=3.22%, ctx=1170, majf=0, minf=9 00:20:27.481 IO depths : 1=0.1%, 2=6.2%, 4=24.6%, 8=56.5%, 16=12.5%, 32=0.0%, >=64=0.0% 00:20:27.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.481 complete : 0=0.0%, 4=94.4%, 8=0.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.481 issued rwts: total=1136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.481 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:27.481 filename1: (groupid=0, jobs=1): err= 0: pid=83320: Mon Jul 15 07:24:34 2024 
00:20:27.481 read: IOPS=125, BW=502KiB/s (514kB/s)(5036KiB/10041msec) 00:20:27.481 slat (usec): min=8, max=8051, avg=38.54, stdev=339.05 00:20:27.481 clat (msec): min=23, max=304, avg=127.27, stdev=81.90 00:20:27.481 lat (msec): min=23, max=304, avg=127.30, stdev=81.90 00:20:27.481 clat percentiles (msec): 00:20:27.481 | 1.00th=[ 33], 5.00th=[ 47], 10.00th=[ 54], 20.00th=[ 61], 00:20:27.481 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 84], 60.00th=[ 104], 00:20:27.481 | 70.00th=[ 194], 80.00th=[ 215], 90.00th=[ 268], 95.00th=[ 292], 00:20:27.481 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 305], 99.95th=[ 305], 00:20:27.481 | 99.99th=[ 305] 00:20:27.481 bw ( KiB/s): min= 144, max= 944, per=4.20%, avg=497.10, stdev=294.88, samples=20 00:20:27.481 iops : min= 36, max= 236, avg=124.25, stdev=73.74, samples=20 00:20:27.481 lat (msec) : 50=9.29%, 100=50.12%, 250=29.31%, 500=11.28% 00:20:27.481 cpu : usr=35.40%, sys=2.21%, ctx=1057, majf=0, minf=9 00:20:27.481 IO depths : 1=0.1%, 2=3.0%, 4=12.0%, 8=70.3%, 16=14.6%, 32=0.0%, >=64=0.0% 00:20:27.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.481 complete : 0=0.0%, 4=90.7%, 8=6.7%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.481 issued rwts: total=1259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.481 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:27.481 filename1: (groupid=0, jobs=1): err= 0: pid=83321: Mon Jul 15 07:24:34 2024 00:20:27.481 read: IOPS=133, BW=533KiB/s (546kB/s)(5360KiB/10049msec) 00:20:27.481 slat (usec): min=5, max=8052, avg=46.18, stdev=396.03 00:20:27.481 clat (usec): min=1523, max=305269, avg=119621.50, stdev=85698.76 00:20:27.481 lat (usec): min=1532, max=305323, avg=119667.68, stdev=85703.89 00:20:27.481 clat percentiles (usec): 00:20:27.481 | 1.00th=[ 1647], 5.00th=[ 2114], 10.00th=[ 35914], 20.00th=[ 60031], 00:20:27.481 | 30.00th=[ 71828], 40.00th=[ 71828], 50.00th=[ 83362], 60.00th=[ 95945], 00:20:27.481 | 70.00th=[191890], 80.00th=[212861], 90.00th=[250610], 95.00th=[291505], 00:20:27.481 | 99.00th=[299893], 99.50th=[299893], 99.90th=[304088], 99.95th=[304088], 00:20:27.481 | 99.99th=[304088] 00:20:27.481 bw ( KiB/s): min= 144, max= 1536, per=4.47%, avg=529.40, stdev=359.66, samples=20 00:20:27.481 iops : min= 36, max= 384, avg=132.30, stdev=89.96, samples=20 00:20:27.481 lat (msec) : 2=4.78%, 4=3.58%, 10=1.19%, 50=5.30%, 100=46.34% 00:20:27.481 lat (msec) : 250=27.16%, 500=11.64% 00:20:27.481 cpu : usr=33.35%, sys=2.40%, ctx=955, majf=0, minf=0 00:20:27.481 IO depths : 1=0.4%, 2=4.3%, 4=15.4%, 8=66.0%, 16=13.9%, 32=0.0%, >=64=0.0% 00:20:27.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.481 complete : 0=0.0%, 4=91.8%, 8=4.8%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.481 issued rwts: total=1340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.481 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:27.481 filename1: (groupid=0, jobs=1): err= 0: pid=83322: Mon Jul 15 07:24:34 2024 00:20:27.481 read: IOPS=137, BW=549KiB/s (562kB/s)(5500KiB/10024msec) 00:20:27.481 slat (nsec): min=8011, max=86326, avg=20671.12, stdev=8927.26 00:20:27.481 clat (msec): min=15, max=299, avg=116.50, stdev=71.14 00:20:27.481 lat (msec): min=15, max=299, avg=116.52, stdev=71.14 00:20:27.481 clat percentiles (msec): 00:20:27.481 | 1.00th=[ 26], 5.00th=[ 46], 10.00th=[ 52], 20.00th=[ 62], 00:20:27.481 | 30.00th=[ 69], 40.00th=[ 75], 50.00th=[ 83], 60.00th=[ 106], 00:20:27.481 | 70.00th=[ 144], 80.00th=[ 194], 90.00th=[ 220], 95.00th=[ 279], 
00:20:27.481 | 99.00th=[ 296], 99.50th=[ 300], 99.90th=[ 300], 99.95th=[ 300], 00:20:27.481 | 99.99th=[ 300] 00:20:27.481 bw ( KiB/s): min= 254, max= 1024, per=4.60%, avg=545.35, stdev=298.57, samples=20 00:20:27.481 iops : min= 63, max= 256, avg=136.25, stdev=74.59, samples=20 00:20:27.481 lat (msec) : 20=0.22%, 50=8.95%, 100=50.11%, 250=34.69%, 500=6.04% 00:20:27.481 cpu : usr=40.62%, sys=3.45%, ctx=1330, majf=0, minf=9 00:20:27.481 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.1%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:27.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.481 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.481 issued rwts: total=1375,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.481 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:27.481 filename1: (groupid=0, jobs=1): err= 0: pid=83323: Mon Jul 15 07:24:34 2024 00:20:27.481 read: IOPS=123, BW=492KiB/s (504kB/s)(4944KiB/10044msec) 00:20:27.481 slat (usec): min=4, max=5821, avg=34.07, stdev=273.91 00:20:27.481 clat (msec): min=18, max=298, avg=129.57, stdev=81.64 00:20:27.481 lat (msec): min=18, max=298, avg=129.60, stdev=81.63 00:20:27.481 clat percentiles (msec): 00:20:27.481 | 1.00th=[ 19], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 67], 00:20:27.481 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 85], 60.00th=[ 106], 00:20:27.481 | 70.00th=[ 197], 80.00th=[ 218], 90.00th=[ 266], 95.00th=[ 292], 00:20:27.481 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 300], 99.95th=[ 300], 00:20:27.481 | 99.99th=[ 300] 00:20:27.481 bw ( KiB/s): min= 144, max= 968, per=4.14%, avg=490.05, stdev=280.07, samples=20 00:20:27.481 iops : min= 36, max= 242, avg=122.45, stdev=70.07, samples=20 00:20:27.481 lat (msec) : 20=1.13%, 50=5.91%, 100=51.62%, 250=30.99%, 500=10.36% 00:20:27.481 cpu : usr=39.26%, sys=3.05%, ctx=1193, majf=0, minf=9 00:20:27.481 IO depths : 1=0.1%, 2=4.5%, 4=18.0%, 8=63.8%, 16=13.5%, 32=0.0%, >=64=0.0% 00:20:27.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.481 complete : 0=0.0%, 4=92.3%, 8=3.7%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.481 issued rwts: total=1236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.481 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:27.481 filename1: (groupid=0, jobs=1): err= 0: pid=83324: Mon Jul 15 07:24:34 2024 00:20:27.481 read: IOPS=122, BW=489KiB/s (501kB/s)(4896KiB/10013msec) 00:20:27.481 slat (usec): min=4, max=8048, avg=37.34, stdev=324.37 00:20:27.481 clat (msec): min=15, max=302, avg=130.67, stdev=82.51 00:20:27.481 lat (msec): min=15, max=302, avg=130.71, stdev=82.52 00:20:27.481 clat percentiles (msec): 00:20:27.482 | 1.00th=[ 30], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 62], 00:20:27.482 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 107], 00:20:27.482 | 70.00th=[ 194], 80.00th=[ 215], 90.00th=[ 275], 95.00th=[ 292], 00:20:27.482 | 99.00th=[ 300], 99.50th=[ 305], 99.90th=[ 305], 99.95th=[ 305], 00:20:27.482 | 99.99th=[ 305] 00:20:27.482 bw ( KiB/s): min= 144, max= 976, per=3.91%, avg=463.05, stdev=290.89, samples=19 00:20:27.482 iops : min= 36, max= 244, avg=115.74, stdev=72.69, samples=19 00:20:27.482 lat (msec) : 20=0.49%, 50=9.15%, 100=47.55%, 250=32.03%, 500=10.78% 00:20:27.482 cpu : usr=31.77%, sys=2.07%, ctx=992, majf=0, minf=9 00:20:27.482 IO depths : 1=0.1%, 2=4.1%, 4=16.3%, 8=65.9%, 16=13.6%, 32=0.0%, >=64=0.0% 00:20:27.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.482 complete : 0=0.0%, 4=91.7%, 
8=4.7%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.482 issued rwts: total=1224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.482 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:27.482 filename1: (groupid=0, jobs=1): err= 0: pid=83325: Mon Jul 15 07:24:34 2024 00:20:27.482 read: IOPS=125, BW=502KiB/s (515kB/s)(5032KiB/10014msec) 00:20:27.482 slat (usec): min=8, max=8041, avg=26.60, stdev=319.78 00:20:27.482 clat (msec): min=19, max=299, avg=127.13, stdev=82.46 00:20:27.482 lat (msec): min=19, max=299, avg=127.16, stdev=82.46 00:20:27.482 clat percentiles (msec): 00:20:27.482 | 1.00th=[ 24], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 61], 00:20:27.482 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 85], 60.00th=[ 100], 00:20:27.482 | 70.00th=[ 203], 80.00th=[ 215], 90.00th=[ 253], 95.00th=[ 292], 00:20:27.482 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 300], 99.95th=[ 300], 00:20:27.482 | 99.99th=[ 300] 00:20:27.482 bw ( KiB/s): min= 143, max= 1048, per=4.06%, avg=481.11, stdev=317.86, samples=19 00:20:27.482 iops : min= 35, max= 262, avg=120.21, stdev=79.47, samples=19 00:20:27.482 lat (msec) : 20=0.56%, 50=10.41%, 100=50.32%, 250=28.22%, 500=10.49% 00:20:27.482 cpu : usr=30.50%, sys=2.56%, ctx=846, majf=0, minf=9 00:20:27.482 IO depths : 1=0.1%, 2=2.7%, 4=10.7%, 8=72.0%, 16=14.5%, 32=0.0%, >=64=0.0% 00:20:27.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.482 complete : 0=0.0%, 4=90.0%, 8=7.6%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.482 issued rwts: total=1258,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.482 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:27.482 filename1: (groupid=0, jobs=1): err= 0: pid=83326: Mon Jul 15 07:24:34 2024 00:20:27.482 read: IOPS=123, BW=496KiB/s (507kB/s)(4984KiB/10057msec) 00:20:27.482 slat (usec): min=7, max=8054, avg=32.45, stdev=278.33 00:20:27.482 clat (msec): min=23, max=315, avg=128.69, stdev=83.30 00:20:27.482 lat (msec): min=23, max=315, avg=128.72, stdev=83.32 00:20:27.482 clat percentiles (msec): 00:20:27.482 | 1.00th=[ 24], 5.00th=[ 49], 10.00th=[ 55], 20.00th=[ 64], 00:20:27.482 | 30.00th=[ 71], 40.00th=[ 75], 50.00th=[ 82], 60.00th=[ 99], 00:20:27.482 | 70.00th=[ 197], 80.00th=[ 220], 90.00th=[ 279], 95.00th=[ 292], 00:20:27.482 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 317], 99.95th=[ 317], 00:20:27.482 | 99.99th=[ 317] 00:20:27.482 bw ( KiB/s): min= 142, max= 968, per=4.17%, avg=494.65, stdev=298.23, samples=20 00:20:27.482 iops : min= 35, max= 242, avg=123.60, stdev=74.62, samples=20 00:20:27.482 lat (msec) : 50=6.26%, 100=53.85%, 250=28.33%, 500=11.56% 00:20:27.482 cpu : usr=40.21%, sys=3.46%, ctx=1489, majf=0, minf=9 00:20:27.482 IO depths : 1=0.1%, 2=4.0%, 4=16.0%, 8=66.1%, 16=13.8%, 32=0.0%, >=64=0.0% 00:20:27.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.482 complete : 0=0.0%, 4=91.7%, 8=4.8%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.482 issued rwts: total=1246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.482 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:27.482 filename2: (groupid=0, jobs=1): err= 0: pid=83327: Mon Jul 15 07:24:34 2024 00:20:27.482 read: IOPS=113, BW=454KiB/s (465kB/s)(4552KiB/10018msec) 00:20:27.482 slat (usec): min=8, max=8044, avg=23.49, stdev=238.07 00:20:27.482 clat (msec): min=43, max=333, avg=140.59, stdev=78.66 00:20:27.482 lat (msec): min=43, max=333, avg=140.61, stdev=78.67 00:20:27.482 clat percentiles (msec): 00:20:27.482 | 1.00th=[ 60], 5.00th=[ 64], 10.00th=[ 68], 20.00th=[ 
72], 00:20:27.482 | 30.00th=[ 82], 40.00th=[ 86], 50.00th=[ 96], 60.00th=[ 111], 00:20:27.482 | 70.00th=[ 205], 80.00th=[ 218], 90.00th=[ 279], 95.00th=[ 292], 00:20:27.482 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 334], 99.95th=[ 334], 00:20:27.482 | 99.99th=[ 334] 00:20:27.482 bw ( KiB/s): min= 143, max= 896, per=3.69%, avg=437.74, stdev=255.04, samples=19 00:20:27.482 iops : min= 35, max= 224, avg=109.37, stdev=63.77, samples=19 00:20:27.482 lat (msec) : 50=0.62%, 100=53.34%, 250=33.39%, 500=12.65% 00:20:27.482 cpu : usr=37.65%, sys=2.78%, ctx=1135, majf=0, minf=9 00:20:27.482 IO depths : 1=0.1%, 2=5.9%, 4=23.5%, 8=57.8%, 16=12.7%, 32=0.0%, >=64=0.0% 00:20:27.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.482 complete : 0=0.0%, 4=94.0%, 8=0.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.482 issued rwts: total=1138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.482 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:27.482 filename2: (groupid=0, jobs=1): err= 0: pid=83328: Mon Jul 15 07:24:34 2024 00:20:27.482 read: IOPS=127, BW=509KiB/s (522kB/s)(5116KiB/10044msec) 00:20:27.482 slat (usec): min=8, max=4044, avg=28.10, stdev=180.01 00:20:27.482 clat (msec): min=22, max=303, avg=125.42, stdev=82.30 00:20:27.482 lat (msec): min=22, max=304, avg=125.45, stdev=82.30 00:20:27.482 clat percentiles (msec): 00:20:27.482 | 1.00th=[ 31], 5.00th=[ 47], 10.00th=[ 53], 20.00th=[ 63], 00:20:27.482 | 30.00th=[ 69], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 92], 00:20:27.482 | 70.00th=[ 197], 80.00th=[ 215], 90.00th=[ 251], 95.00th=[ 288], 00:20:27.482 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 305], 99.95th=[ 305], 00:20:27.482 | 99.99th=[ 305] 00:20:27.482 bw ( KiB/s): min= 144, max= 976, per=4.27%, avg=506.00, stdev=312.79, samples=20 00:20:27.482 iops : min= 36, max= 244, avg=126.45, stdev=78.23, samples=20 00:20:27.482 lat (msec) : 50=9.15%, 100=53.32%, 250=27.76%, 500=9.77% 00:20:27.482 cpu : usr=37.14%, sys=2.61%, ctx=1187, majf=0, minf=9 00:20:27.482 IO depths : 1=0.1%, 2=3.0%, 4=11.8%, 8=70.8%, 16=14.4%, 32=0.0%, >=64=0.0% 00:20:27.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.482 complete : 0=0.0%, 4=90.5%, 8=6.9%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.482 issued rwts: total=1279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.482 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:27.482 filename2: (groupid=0, jobs=1): err= 0: pid=83329: Mon Jul 15 07:24:34 2024 00:20:27.482 read: IOPS=121, BW=485KiB/s (497kB/s)(4868KiB/10031msec) 00:20:27.482 slat (usec): min=8, max=1099, avg=21.29, stdev=32.04 00:20:27.482 clat (msec): min=36, max=300, avg=131.61, stdev=80.63 00:20:27.482 lat (msec): min=36, max=300, avg=131.63, stdev=80.64 00:20:27.482 clat percentiles (msec): 00:20:27.482 | 1.00th=[ 44], 5.00th=[ 50], 10.00th=[ 59], 20.00th=[ 70], 00:20:27.482 | 30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 107], 00:20:27.482 | 70.00th=[ 201], 80.00th=[ 218], 90.00th=[ 271], 95.00th=[ 288], 00:20:27.482 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 300], 99.95th=[ 300], 00:20:27.482 | 99.99th=[ 300] 00:20:27.482 bw ( KiB/s): min= 144, max= 968, per=4.07%, avg=482.05, stdev=284.97, samples=20 00:20:27.482 iops : min= 36, max= 242, avg=120.45, stdev=71.17, samples=20 00:20:27.482 lat (msec) : 50=5.34%, 100=52.83%, 250=31.31%, 500=10.52% 00:20:27.482 cpu : usr=33.57%, sys=2.27%, ctx=984, majf=0, minf=9 00:20:27.482 IO depths : 1=0.1%, 2=4.4%, 4=17.7%, 8=64.3%, 16=13.5%, 32=0.0%, >=64=0.0% 
00:20:27.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.482 complete : 0=0.0%, 4=92.1%, 8=4.0%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.482 issued rwts: total=1217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.482 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:27.482 filename2: (groupid=0, jobs=1): err= 0: pid=83330: Mon Jul 15 07:24:34 2024 00:20:27.482 read: IOPS=123, BW=493KiB/s (505kB/s)(4944KiB/10030msec) 00:20:27.482 slat (usec): min=5, max=8035, avg=42.74, stdev=395.65 00:20:27.482 clat (msec): min=23, max=300, avg=129.40, stdev=81.20 00:20:27.483 lat (msec): min=23, max=300, avg=129.44, stdev=81.20 00:20:27.483 clat percentiles (msec): 00:20:27.483 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 64], 00:20:27.483 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 85], 60.00th=[ 107], 00:20:27.483 | 70.00th=[ 203], 80.00th=[ 215], 90.00th=[ 264], 95.00th=[ 292], 00:20:27.483 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 300], 99.95th=[ 300], 00:20:27.483 | 99.99th=[ 300] 00:20:27.483 bw ( KiB/s): min= 144, max= 976, per=4.13%, avg=489.25, stdev=294.32, samples=20 00:20:27.483 iops : min= 36, max= 244, avg=122.25, stdev=73.50, samples=20 00:20:27.483 lat (msec) : 50=7.52%, 100=51.29%, 250=29.69%, 500=11.49% 00:20:27.483 cpu : usr=31.69%, sys=2.63%, ctx=895, majf=0, minf=9 00:20:27.483 IO depths : 1=0.1%, 2=3.6%, 4=14.5%, 8=67.7%, 16=14.1%, 32=0.0%, >=64=0.0% 00:20:27.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.483 complete : 0=0.0%, 4=91.3%, 8=5.5%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.483 issued rwts: total=1236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.483 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:27.483 filename2: (groupid=0, jobs=1): err= 0: pid=83331: Mon Jul 15 07:24:34 2024 00:20:27.483 read: IOPS=116, BW=465KiB/s (476kB/s)(4664KiB/10025msec) 00:20:27.483 slat (usec): min=8, max=4055, avg=28.95, stdev=185.48 00:20:27.483 clat (msec): min=43, max=302, avg=137.30, stdev=79.20 00:20:27.483 lat (msec): min=43, max=302, avg=137.33, stdev=79.20 00:20:27.483 clat percentiles (msec): 00:20:27.483 | 1.00th=[ 45], 5.00th=[ 61], 10.00th=[ 68], 20.00th=[ 72], 00:20:27.483 | 30.00th=[ 77], 40.00th=[ 82], 50.00th=[ 91], 60.00th=[ 127], 00:20:27.483 | 70.00th=[ 205], 80.00th=[ 220], 90.00th=[ 268], 95.00th=[ 292], 00:20:27.483 | 99.00th=[ 305], 99.50th=[ 305], 99.90th=[ 305], 99.95th=[ 305], 00:20:27.483 | 99.99th=[ 305] 00:20:27.483 bw ( KiB/s): min= 144, max= 896, per=3.90%, avg=462.15, stdev=259.72, samples=20 00:20:27.483 iops : min= 36, max= 224, avg=115.45, stdev=64.86, samples=20 00:20:27.483 lat (msec) : 50=1.97%, 100=52.74%, 250=34.31%, 500=10.98% 00:20:27.483 cpu : usr=39.74%, sys=3.38%, ctx=1229, majf=0, minf=9 00:20:27.483 IO depths : 1=0.1%, 2=5.7%, 4=22.6%, 8=58.8%, 16=12.9%, 32=0.0%, >=64=0.0% 00:20:27.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.483 complete : 0=0.0%, 4=93.7%, 8=1.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.483 issued rwts: total=1166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.483 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:27.483 filename2: (groupid=0, jobs=1): err= 0: pid=83332: Mon Jul 15 07:24:34 2024 00:20:27.483 read: IOPS=114, BW=459KiB/s (470kB/s)(4604KiB/10039msec) 00:20:27.483 slat (usec): min=8, max=8048, avg=27.64, stdev=236.79 00:20:27.483 clat (msec): min=33, max=305, avg=139.30, stdev=78.91 00:20:27.483 lat (msec): min=33, max=305, avg=139.33, 
stdev=78.92 00:20:27.483 clat percentiles (msec): 00:20:27.483 | 1.00th=[ 35], 5.00th=[ 55], 10.00th=[ 64], 20.00th=[ 72], 00:20:27.483 | 30.00th=[ 83], 40.00th=[ 85], 50.00th=[ 96], 60.00th=[ 118], 00:20:27.483 | 70.00th=[ 205], 80.00th=[ 218], 90.00th=[ 253], 95.00th=[ 292], 00:20:27.483 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 305], 99.95th=[ 305], 00:20:27.483 | 99.99th=[ 305] 00:20:27.483 bw ( KiB/s): min= 144, max= 880, per=3.83%, avg=453.55, stdev=243.57, samples=20 00:20:27.483 iops : min= 36, max= 220, avg=113.35, stdev=60.84, samples=20 00:20:27.483 lat (msec) : 50=3.56%, 100=47.70%, 250=37.45%, 500=11.29% 00:20:27.483 cpu : usr=31.91%, sys=2.20%, ctx=883, majf=0, minf=9 00:20:27.483 IO depths : 1=0.1%, 2=5.6%, 4=22.5%, 8=58.9%, 16=12.9%, 32=0.0%, >=64=0.0% 00:20:27.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.483 complete : 0=0.0%, 4=93.7%, 8=1.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.483 issued rwts: total=1151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.483 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:27.483 filename2: (groupid=0, jobs=1): err= 0: pid=83333: Mon Jul 15 07:24:34 2024 00:20:27.483 read: IOPS=114, BW=458KiB/s (469kB/s)(4600KiB/10050msec) 00:20:27.483 slat (usec): min=3, max=8060, avg=41.67, stdev=352.04 00:20:27.483 clat (msec): min=46, max=299, avg=139.48, stdev=78.33 00:20:27.483 lat (msec): min=46, max=299, avg=139.52, stdev=78.33 00:20:27.483 clat percentiles (msec): 00:20:27.483 | 1.00th=[ 55], 5.00th=[ 61], 10.00th=[ 70], 20.00th=[ 74], 00:20:27.483 | 30.00th=[ 80], 40.00th=[ 87], 50.00th=[ 99], 60.00th=[ 114], 00:20:27.483 | 70.00th=[ 199], 80.00th=[ 220], 90.00th=[ 266], 95.00th=[ 292], 00:20:27.483 | 99.00th=[ 296], 99.50th=[ 296], 99.90th=[ 300], 99.95th=[ 300], 00:20:27.483 | 99.99th=[ 300] 00:20:27.483 bw ( KiB/s): min= 144, max= 784, per=3.83%, avg=454.25, stdev=236.68, samples=20 00:20:27.483 iops : min= 36, max= 196, avg=113.50, stdev=59.21, samples=20 00:20:27.483 lat (msec) : 50=0.17%, 100=50.78%, 250=37.91%, 500=11.13% 00:20:27.483 cpu : usr=42.22%, sys=3.38%, ctx=1527, majf=0, minf=9 00:20:27.483 IO depths : 1=0.1%, 2=6.3%, 4=24.7%, 8=56.5%, 16=12.4%, 32=0.0%, >=64=0.0% 00:20:27.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.483 complete : 0=0.0%, 4=94.4%, 8=0.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.483 issued rwts: total=1150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.483 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:27.483 filename2: (groupid=0, jobs=1): err= 0: pid=83334: Mon Jul 15 07:24:34 2024 00:20:27.483 read: IOPS=138, BW=554KiB/s (568kB/s)(5564KiB/10037msec) 00:20:27.483 slat (usec): min=6, max=6460, avg=24.38, stdev=203.89 00:20:27.483 clat (msec): min=23, max=301, avg=115.29, stdev=71.32 00:20:27.483 lat (msec): min=23, max=301, avg=115.31, stdev=71.33 00:20:27.483 clat percentiles (msec): 00:20:27.483 | 1.00th=[ 30], 5.00th=[ 42], 10.00th=[ 50], 20.00th=[ 59], 00:20:27.483 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 100], 00:20:27.483 | 70.00th=[ 140], 80.00th=[ 197], 90.00th=[ 218], 95.00th=[ 279], 00:20:27.483 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 300], 99.95th=[ 300], 00:20:27.483 | 99.99th=[ 300] 00:20:27.483 bw ( KiB/s): min= 208, max= 1024, per=4.64%, avg=549.50, stdev=298.48, samples=20 00:20:27.483 iops : min= 52, max= 256, avg=137.35, stdev=74.59, samples=20 00:20:27.483 lat (msec) : 50=10.57%, 100=49.53%, 250=34.65%, 500=5.25% 00:20:27.483 cpu : usr=45.79%, 
sys=3.82%, ctx=1403, majf=0, minf=9 00:20:27.483 IO depths : 1=0.1%, 2=0.3%, 4=0.9%, 8=83.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:27.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.483 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.483 issued rwts: total=1391,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.483 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:27.483 00:20:27.483 Run status group 0 (all jobs): 00:20:27.483 READ: bw=11.6MiB/s (12.1MB/s), 454KiB/s-554KiB/s (465kB/s-568kB/s), io=116MiB (122MB), run=10002-10057msec 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 
-- # xtrace_disable 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:20:27.483 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:27.484 bdev_null0 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:27.484 [2024-07-15 07:24:34.519628] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.484 07:24:34 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:27.484 bdev_null1 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.484 { 00:20:27.484 "params": { 00:20:27.484 "name": "Nvme$subsystem", 00:20:27.484 "trtype": "$TEST_TRANSPORT", 00:20:27.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.484 "adrfam": "ipv4", 00:20:27.484 "trsvcid": "$NVMF_PORT", 00:20:27.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.484 "hdgst": ${hdgst:-false}, 00:20:27.484 "ddgst": ${ddgst:-false} 00:20:27.484 }, 00:20:27.484 "method": 
"bdev_nvme_attach_controller" 00:20:27.484 } 00:20:27.484 EOF 00:20:27.484 )") 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.484 { 00:20:27.484 "params": { 00:20:27.484 "name": "Nvme$subsystem", 00:20:27.484 "trtype": "$TEST_TRANSPORT", 00:20:27.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.484 "adrfam": "ipv4", 00:20:27.484 "trsvcid": "$NVMF_PORT", 00:20:27.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.484 "hdgst": ${hdgst:-false}, 00:20:27.484 "ddgst": ${ddgst:-false} 00:20:27.484 }, 00:20:27.484 "method": "bdev_nvme_attach_controller" 00:20:27.484 } 00:20:27.484 EOF 00:20:27.484 )") 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:27.484 07:24:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:27.484 "params": { 00:20:27.484 "name": "Nvme0", 00:20:27.484 "trtype": "tcp", 00:20:27.484 "traddr": "10.0.0.2", 00:20:27.484 "adrfam": "ipv4", 00:20:27.484 "trsvcid": "4420", 00:20:27.484 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:27.484 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:27.484 "hdgst": false, 00:20:27.484 "ddgst": false 00:20:27.484 }, 00:20:27.485 "method": "bdev_nvme_attach_controller" 00:20:27.485 },{ 00:20:27.485 "params": { 00:20:27.485 "name": "Nvme1", 00:20:27.485 "trtype": "tcp", 00:20:27.485 "traddr": "10.0.0.2", 00:20:27.485 "adrfam": "ipv4", 00:20:27.485 "trsvcid": "4420", 00:20:27.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.485 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:27.485 "hdgst": false, 00:20:27.485 "ddgst": false 00:20:27.485 }, 00:20:27.485 "method": "bdev_nvme_attach_controller" 00:20:27.485 }' 00:20:27.485 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:27.485 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:27.485 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:27.485 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:27.485 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:27.485 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:27.485 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:27.485 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:27.485 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:27.485 07:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:27.485 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:27.485 ... 00:20:27.485 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:27.485 ... 
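(For orientation: the fio job file itself is generated by gen_fio_conf and handed to fio as /dev/fd/61, so its contents never appear in this trace. The sketch below is an illustrative reconstruction only, pieced together from the banner above (rw=randread, bs=8k,16k,128k, iodepth=8) and the numjobs=2/runtime=5 parameters set earlier in the test; the filename=NvmeXn1 names assume SPDK's usual "controller name plus n1" bdev naming, and any option not visible in the trace is an assumption, not the harness's actual output.)

cat > /tmp/dif_rand_params.fio <<'FIO'
[global]
ioengine=spdk_bdev
# the SPDK fio plugin runs jobs as threads rather than forked processes
thread=1
rw=randread
# read/write/trim block sizes, matching the job banner above
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
FIO

# nvme_attach.json stands in for the JSON config the harness pipes through /dev/fd/62
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf nvme_attach.json /tmp/dif_rand_params.fio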
00:20:27.485 fio-3.35 00:20:27.485 Starting 4 threads 00:20:31.669 00:20:31.669 filename0: (groupid=0, jobs=1): err= 0: pid=83460: Mon Jul 15 07:24:40 2024 00:20:31.669 read: IOPS=1940, BW=15.2MiB/s (15.9MB/s)(75.8MiB/5001msec) 00:20:31.669 slat (nsec): min=3775, max=52566, avg=16165.99, stdev=4441.67 00:20:31.669 clat (usec): min=1166, max=10618, avg=4068.71, stdev=818.76 00:20:31.669 lat (usec): min=1177, max=10634, avg=4084.87, stdev=818.41 00:20:31.669 clat percentiles (usec): 00:20:31.669 | 1.00th=[ 1680], 5.00th=[ 2671], 10.00th=[ 3195], 20.00th=[ 3425], 00:20:31.669 | 30.00th=[ 3720], 40.00th=[ 3949], 50.00th=[ 4228], 60.00th=[ 4293], 00:20:31.669 | 70.00th=[ 4359], 80.00th=[ 4621], 90.00th=[ 5014], 95.00th=[ 5342], 00:20:31.669 | 99.00th=[ 6128], 99.50th=[ 6325], 99.90th=[ 6915], 99.95th=[ 8455], 00:20:31.669 | 99.99th=[10683] 00:20:31.669 bw ( KiB/s): min=14608, max=17856, per=24.74%, avg=15742.33, stdev=949.41, samples=9 00:20:31.669 iops : min= 1826, max= 2232, avg=1967.78, stdev=118.68, samples=9 00:20:31.669 lat (msec) : 2=1.79%, 4=40.94%, 10=57.26%, 20=0.01% 00:20:31.669 cpu : usr=90.70%, sys=8.18%, ctx=48, majf=0, minf=9 00:20:31.669 IO depths : 1=0.1%, 2=12.1%, 4=60.2%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:31.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.669 complete : 0=0.0%, 4=95.3%, 8=4.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.669 issued rwts: total=9705,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:31.669 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:31.669 filename0: (groupid=0, jobs=1): err= 0: pid=83461: Mon Jul 15 07:24:40 2024 00:20:31.669 read: IOPS=1957, BW=15.3MiB/s (16.0MB/s)(76.5MiB/5002msec) 00:20:31.669 slat (nsec): min=4202, max=53445, avg=14972.99, stdev=4769.64 00:20:31.669 clat (usec): min=707, max=8566, avg=4040.63, stdev=1083.51 00:20:31.669 lat (usec): min=715, max=8583, avg=4055.60, stdev=1084.10 00:20:31.669 clat percentiles (usec): 00:20:31.669 | 1.00th=[ 1434], 5.00th=[ 1598], 10.00th=[ 2802], 20.00th=[ 3392], 00:20:31.669 | 30.00th=[ 3490], 40.00th=[ 3949], 50.00th=[ 4080], 60.00th=[ 4293], 00:20:31.669 | 70.00th=[ 4359], 80.00th=[ 4817], 90.00th=[ 5342], 95.00th=[ 6128], 00:20:31.669 | 99.00th=[ 6456], 99.50th=[ 6521], 99.90th=[ 7635], 99.95th=[ 8291], 00:20:31.669 | 99.99th=[ 8586] 00:20:31.669 bw ( KiB/s): min=10512, max=19840, per=24.49%, avg=15582.11, stdev=2583.07, samples=9 00:20:31.669 iops : min= 1314, max= 2480, avg=1947.67, stdev=322.88, samples=9 00:20:31.669 lat (usec) : 750=0.02%, 1000=0.07% 00:20:31.669 lat (msec) : 2=6.05%, 4=39.32%, 10=54.54% 00:20:31.669 cpu : usr=91.02%, sys=8.00%, ctx=9, majf=0, minf=10 00:20:31.669 IO depths : 1=0.1%, 2=9.9%, 4=60.6%, 8=29.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:31.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.669 complete : 0=0.0%, 4=96.2%, 8=3.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.669 issued rwts: total=9789,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:31.669 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:31.669 filename1: (groupid=0, jobs=1): err= 0: pid=83462: Mon Jul 15 07:24:40 2024 00:20:31.669 read: IOPS=2033, BW=15.9MiB/s (16.7MB/s)(79.5MiB/5002msec) 00:20:31.669 slat (nsec): min=4720, max=53547, avg=13625.29, stdev=5253.83 00:20:31.669 clat (usec): min=680, max=11194, avg=3890.93, stdev=945.02 00:20:31.669 lat (usec): min=689, max=11226, avg=3904.56, stdev=945.89 00:20:31.669 clat percentiles (usec): 00:20:31.669 | 1.00th=[ 1401], 5.00th=[ 1909], 
10.00th=[ 2704], 20.00th=[ 3392], 00:20:31.669 | 30.00th=[ 3458], 40.00th=[ 3851], 50.00th=[ 4015], 60.00th=[ 4228], 00:20:31.669 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 5014], 95.00th=[ 5276], 00:20:31.669 | 99.00th=[ 5800], 99.50th=[ 6325], 99.90th=[ 7635], 99.95th=[ 9503], 00:20:31.669 | 99.99th=[ 9503] 00:20:31.669 bw ( KiB/s): min=14480, max=18976, per=25.17%, avg=16021.33, stdev=1481.28, samples=9 00:20:31.669 iops : min= 1810, max= 2372, avg=2002.67, stdev=185.16, samples=9 00:20:31.669 lat (usec) : 750=0.08%, 1000=0.16% 00:20:31.669 lat (msec) : 2=5.06%, 4=44.83%, 10=49.86%, 20=0.01% 00:20:31.669 cpu : usr=90.72%, sys=8.22%, ctx=11, majf=0, minf=0 00:20:31.669 IO depths : 1=0.1%, 2=8.5%, 4=62.0%, 8=29.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:31.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.669 complete : 0=0.0%, 4=96.7%, 8=3.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.669 issued rwts: total=10173,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:31.669 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:31.669 filename1: (groupid=0, jobs=1): err= 0: pid=83463: Mon Jul 15 07:24:40 2024 00:20:31.669 read: IOPS=2023, BW=15.8MiB/s (16.6MB/s)(79.1MiB/5002msec) 00:20:31.669 slat (usec): min=7, max=669, avg=16.42, stdev= 8.11 00:20:31.669 clat (usec): min=764, max=8159, avg=3901.29, stdev=912.98 00:20:31.669 lat (usec): min=773, max=8176, avg=3917.71, stdev=912.82 00:20:31.669 clat percentiles (usec): 00:20:31.669 | 1.00th=[ 1336], 5.00th=[ 2008], 10.00th=[ 2704], 20.00th=[ 3392], 00:20:31.669 | 30.00th=[ 3458], 40.00th=[ 3851], 50.00th=[ 4015], 60.00th=[ 4228], 00:20:31.669 | 70.00th=[ 4293], 80.00th=[ 4490], 90.00th=[ 4948], 95.00th=[ 5276], 00:20:31.669 | 99.00th=[ 5669], 99.50th=[ 6194], 99.90th=[ 7439], 99.95th=[ 7832], 00:20:31.669 | 99.99th=[ 7898] 00:20:31.669 bw ( KiB/s): min=14480, max=18112, per=25.52%, avg=16243.44, stdev=1276.65, samples=9 00:20:31.669 iops : min= 1810, max= 2264, avg=2030.33, stdev=159.63, samples=9 00:20:31.669 lat (usec) : 1000=0.55% 00:20:31.669 lat (msec) : 2=4.45%, 4=44.27%, 10=50.74% 00:20:31.669 cpu : usr=90.26%, sys=8.58%, ctx=5, majf=0, minf=9 00:20:31.669 IO depths : 1=0.1%, 2=8.9%, 4=61.9%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:31.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.669 complete : 0=0.0%, 4=96.5%, 8=3.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.669 issued rwts: total=10123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:31.669 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:31.669 00:20:31.669 Run status group 0 (all jobs): 00:20:31.669 READ: bw=62.1MiB/s (65.2MB/s), 15.2MiB/s-15.9MiB/s (15.9MB/s-16.7MB/s), io=311MiB (326MB), run=5001-5002msec 00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
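(The destroy_subsystems 0 1 call traced around this point simply undoes the earlier setup: for each subsystem id it deletes the NVMe-oF subsystem and then the null bdev that backed it, via rpc_cmd. A hand-driven equivalent is sketched below; the rpc.py path is an assumption based on the repo layout shown in this log, while the RPC names and arguments are exactly the ones in the trace.)

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location of SPDK's rpc.py
for sub in 0 1; do
  # remove the NVMe-oF subsystem first, then the null bdev behind it
  "$rpc" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub"
  "$rpc" bdev_null_delete "bdev_null$sub"
done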
00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.669 00:20:31.669 real 0m23.283s 00:20:31.669 user 2m0.816s 00:20:31.669 sys 0m10.239s 00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:31.669 ************************************ 00:20:31.669 07:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.669 END TEST fio_dif_rand_params 00:20:31.669 ************************************ 00:20:31.669 07:24:40 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:31.669 07:24:40 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:31.669 07:24:40 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:31.669 07:24:40 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:31.669 07:24:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:31.669 ************************************ 00:20:31.669 START TEST fio_dif_digest 00:20:31.670 ************************************ 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:20:31.670 07:24:40 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:31.670 bdev_null0 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:31.670 [2024-07-15 07:24:40.577277] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.670 { 00:20:31.670 "params": { 00:20:31.670 "name": 
"Nvme$subsystem", 00:20:31.670 "trtype": "$TEST_TRANSPORT", 00:20:31.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.670 "adrfam": "ipv4", 00:20:31.670 "trsvcid": "$NVMF_PORT", 00:20:31.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.670 "hdgst": ${hdgst:-false}, 00:20:31.670 "ddgst": ${ddgst:-false} 00:20:31.670 }, 00:20:31.670 "method": "bdev_nvme_attach_controller" 00:20:31.670 } 00:20:31.670 EOF 00:20:31.670 )") 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:31.670 "params": { 00:20:31.670 "name": "Nvme0", 00:20:31.670 "trtype": "tcp", 00:20:31.670 "traddr": "10.0.0.2", 00:20:31.670 "adrfam": "ipv4", 00:20:31.670 "trsvcid": "4420", 00:20:31.670 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:31.670 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:31.670 "hdgst": true, 00:20:31.670 "ddgst": true 00:20:31.670 }, 00:20:31.670 "method": "bdev_nvme_attach_controller" 00:20:31.670 }' 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:31.670 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:31.929 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:31.929 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:31.929 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:31.929 07:24:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:31.929 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:31.929 ... 
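Stripped of the xtrace wrappers, the fio_dif_digest setup traced above amounts to four RPCs plus one fio run against the exported namespace. A minimal stand-alone sketch follows, assuming a running nvmf_tgt with SPDK's scripts/rpc.py reachable as rpc.py and the fio bdev plugin built at build/fio/spdk_bdev; the /tmp paths are illustrative, and the job file is inferred from the fio banner below (randread, 128 KiB blocks, queue depth 3, three jobs, roughly 10 s runtime) rather than copied from gen_fio_conf, whose output is not echoed in the log.

# Target side: DIF type-3 null bdev exported over NVMe/TCP (same RPCs as in the trace).
rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: the JSON printed above, wrapped in SPDK's standard bdev config schema,
# attaches the controller with header and data digests enabled.
cat > /tmp/bdev.json <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [{"method": "bdev_nvme_attach_controller",
  "params": {"name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
             "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
             "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": true, "ddgst": true}}]}]}
EOF

# Approximate job file for the run below (three jobs, QD 3, 128k random reads).
cat > /tmp/digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
direct=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=10
[filename0]
filename=Nvme0n1
EOF

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/digest.fio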
00:20:31.929 fio-3.35 00:20:31.929 Starting 3 threads 00:20:44.145 00:20:44.145 filename0: (groupid=0, jobs=1): err= 0: pid=83569: Mon Jul 15 07:24:51 2024 00:20:44.145 read: IOPS=214, BW=26.9MiB/s (28.2MB/s)(269MiB/10008msec) 00:20:44.145 slat (nsec): min=8390, max=68438, avg=19817.79, stdev=7185.19 00:20:44.145 clat (usec): min=13420, max=21904, avg=13910.81, stdev=591.37 00:20:44.145 lat (usec): min=13435, max=21926, avg=13930.63, stdev=591.95 00:20:44.145 clat percentiles (usec): 00:20:44.145 | 1.00th=[13435], 5.00th=[13566], 10.00th=[13566], 20.00th=[13566], 00:20:44.145 | 30.00th=[13698], 40.00th=[13698], 50.00th=[13698], 60.00th=[13829], 00:20:44.145 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14353], 95.00th=[14877], 00:20:44.145 | 99.00th=[16909], 99.50th=[17171], 99.90th=[21890], 99.95th=[21890], 00:20:44.145 | 99.99th=[21890] 00:20:44.145 bw ( KiB/s): min=26112, max=28472, per=33.32%, avg=27497.20, stdev=644.49, samples=20 00:20:44.145 iops : min= 204, max= 222, avg=214.80, stdev= 5.00, samples=20 00:20:44.145 lat (msec) : 20=99.86%, 50=0.14% 00:20:44.145 cpu : usr=91.22%, sys=8.13%, ctx=132, majf=0, minf=0 00:20:44.145 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:44.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.145 issued rwts: total=2151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.145 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:44.145 filename0: (groupid=0, jobs=1): err= 0: pid=83570: Mon Jul 15 07:24:51 2024 00:20:44.145 read: IOPS=214, BW=26.9MiB/s (28.2MB/s)(269MiB/10007msec) 00:20:44.146 slat (nsec): min=8452, max=55290, avg=19733.81, stdev=7251.91 00:20:44.146 clat (usec): min=13136, max=21898, avg=13909.59, stdev=589.83 00:20:44.146 lat (usec): min=13147, max=21916, avg=13929.32, stdev=590.10 00:20:44.146 clat percentiles (usec): 00:20:44.146 | 1.00th=[13435], 5.00th=[13566], 10.00th=[13566], 20.00th=[13566], 00:20:44.146 | 30.00th=[13698], 40.00th=[13698], 50.00th=[13698], 60.00th=[13829], 00:20:44.146 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14353], 95.00th=[14877], 00:20:44.146 | 99.00th=[16909], 99.50th=[17171], 99.90th=[21890], 99.95th=[21890], 00:20:44.146 | 99.99th=[21890] 00:20:44.146 bw ( KiB/s): min=26112, max=28472, per=33.33%, avg=27499.85, stdev=641.92, samples=20 00:20:44.146 iops : min= 204, max= 222, avg=214.80, stdev= 5.00, samples=20 00:20:44.146 lat (msec) : 20=99.86%, 50=0.14% 00:20:44.146 cpu : usr=91.61%, sys=7.79%, ctx=50, majf=0, minf=0 00:20:44.146 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:44.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.146 issued rwts: total=2151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.146 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:44.146 filename0: (groupid=0, jobs=1): err= 0: pid=83571: Mon Jul 15 07:24:51 2024 00:20:44.146 read: IOPS=214, BW=26.9MiB/s (28.2MB/s)(269MiB/10010msec) 00:20:44.146 slat (usec): min=7, max=245, avg=19.59, stdev= 9.96 00:20:44.146 clat (usec): min=13375, max=22050, avg=13913.43, stdev=603.91 00:20:44.146 lat (usec): min=13383, max=22075, avg=13933.02, stdev=604.69 00:20:44.146 clat percentiles (usec): 00:20:44.146 | 1.00th=[13435], 5.00th=[13566], 10.00th=[13566], 20.00th=[13566], 00:20:44.146 | 30.00th=[13698], 40.00th=[13698], 
50.00th=[13698], 60.00th=[13829], 00:20:44.146 | 70.00th=[13960], 80.00th=[14091], 90.00th=[14353], 95.00th=[14877], 00:20:44.146 | 99.00th=[16909], 99.50th=[17171], 99.90th=[21890], 99.95th=[22152], 00:20:44.146 | 99.99th=[22152] 00:20:44.146 bw ( KiB/s): min=26112, max=28416, per=33.32%, avg=27491.55, stdev=635.93, samples=20 00:20:44.146 iops : min= 204, max= 222, avg=214.75, stdev= 4.93, samples=20 00:20:44.146 lat (msec) : 20=99.86%, 50=0.14% 00:20:44.146 cpu : usr=90.70%, sys=8.31%, ctx=111, majf=0, minf=0 00:20:44.146 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:44.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.146 issued rwts: total=2151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.146 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:44.146 00:20:44.146 Run status group 0 (all jobs): 00:20:44.146 READ: bw=80.6MiB/s (84.5MB/s), 26.9MiB/s-26.9MiB/s (28.2MB/s-28.2MB/s), io=807MiB (846MB), run=10007-10010msec 00:20:44.146 07:24:51 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:44.146 07:24:51 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:20:44.146 07:24:51 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:20:44.146 07:24:51 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:44.146 07:24:51 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:20:44.146 07:24:51 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:44.146 07:24:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.146 07:24:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:44.146 07:24:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.146 07:24:51 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:44.146 07:24:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.146 07:24:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:44.146 07:24:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.146 00:20:44.146 real 0m10.901s 00:20:44.146 user 0m27.953s 00:20:44.146 sys 0m2.649s 00:20:44.146 07:24:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:44.146 07:24:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:44.146 ************************************ 00:20:44.146 END TEST fio_dif_digest 00:20:44.146 ************************************ 00:20:44.146 07:24:51 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:44.146 07:24:51 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:44.146 07:24:51 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:20:44.146 07:24:51 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:44.146 07:24:51 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:20:44.146 07:24:51 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:44.146 07:24:51 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:20:44.146 07:24:51 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:44.146 07:24:51 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:44.146 rmmod nvme_tcp 00:20:44.146 rmmod nvme_fabrics 00:20:44.146 rmmod nvme_keyring 00:20:44.146 07:24:51 
nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:44.146 07:24:51 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:20:44.146 07:24:51 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:20:44.146 07:24:51 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 82845 ']' 00:20:44.146 07:24:51 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 82845 00:20:44.146 07:24:51 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 82845 ']' 00:20:44.146 07:24:51 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 82845 00:20:44.146 07:24:51 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:20:44.146 07:24:51 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:44.146 07:24:51 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82845 00:20:44.146 07:24:51 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:44.146 07:24:51 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:44.146 killing process with pid 82845 00:20:44.146 07:24:51 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82845' 00:20:44.146 07:24:51 nvmf_dif -- common/autotest_common.sh@967 -- # kill 82845 00:20:44.146 07:24:51 nvmf_dif -- common/autotest_common.sh@972 -- # wait 82845 00:20:44.146 07:24:51 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:20:44.146 07:24:51 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:44.146 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:44.146 Waiting for block devices as requested 00:20:44.146 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:44.146 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:44.146 07:24:52 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:44.146 07:24:52 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:44.146 07:24:52 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:44.146 07:24:52 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:44.146 07:24:52 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.146 07:24:52 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:44.146 07:24:52 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.146 07:24:52 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:44.146 00:20:44.146 real 0m58.277s 00:20:44.146 user 3m42.841s 00:20:44.146 sys 0m21.331s 00:20:44.146 07:24:52 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:44.146 ************************************ 00:20:44.146 END TEST nvmf_dif 00:20:44.146 07:24:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:44.146 ************************************ 00:20:44.146 07:24:52 -- common/autotest_common.sh@1142 -- # return 0 00:20:44.146 07:24:52 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:44.146 07:24:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:44.146 07:24:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:44.146 07:24:52 -- common/autotest_common.sh@10 -- # set +x 00:20:44.146 ************************************ 00:20:44.146 START TEST nvmf_abort_qd_sizes 00:20:44.146 ************************************ 00:20:44.146 07:24:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:44.146 * Looking for test storage... 00:20:44.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:44.146 07:24:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:44.146 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:20:44.146 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:44.146 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:44.146 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:44.146 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:44.146 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:44.146 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:44.146 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:44.146 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:44.146 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:44.146 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:44.146 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:20:44.146 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:20:44.146 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:44.146 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:44.146 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:44.146 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:44.146 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:44.146 07:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:44.146 07:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:44.146 07:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:44.146 07:24:52 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:44.147 07:24:52 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:44.147 Cannot find device "nvmf_tgt_br" 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:44.147 Cannot find device "nvmf_tgt_br2" 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:44.147 Cannot find device "nvmf_tgt_br" 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:44.147 Cannot find device "nvmf_tgt_br2" 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:44.147 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:44.147 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:44.147 07:24:52 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:44.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:20:44.147 00:20:44.147 --- 10.0.0.2 ping statistics --- 00:20:44.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.147 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:44.147 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:44.147 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:20:44.147 00:20:44.147 --- 10.0.0.3 ping statistics --- 00:20:44.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.147 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:44.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:44.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:20:44.147 00:20:44.147 --- 10.0.0.1 ping statistics --- 00:20:44.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.147 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:20:44.147 07:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:44.712 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:44.712 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:44.712 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:44.971 07:24:53 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.971 07:24:53 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:44.971 07:24:53 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:44.971 07:24:53 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.971 07:24:53 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:44.971 07:24:53 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:44.971 07:24:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:20:44.971 07:24:53 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:44.971 07:24:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:44.971 07:24:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:44.971 07:24:53 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=84165 00:20:44.971 07:24:53 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:20:44.971 07:24:53 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 84165 00:20:44.971 07:24:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 84165 ']' 00:20:44.971 07:24:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.971 07:24:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:44.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.971 07:24:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.971 07:24:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:44.971 07:24:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:44.971 [2024-07-15 07:24:53.787104] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
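With the three pings above confirming reachability, the topology that nvmf_veth_init just built can be read straight off the trace; condensed into plain ip(8) and iptables commands (tear-down and error handling omitted), it is:

# One veth pair for the initiator, two for the target, with the target ends
# moved into the nvmf_tgt_ns_spdk namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic and verify the paths the pings above exercised.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1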
00:20:44.971 [2024-07-15 07:24:53.787192] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.971 [2024-07-15 07:24:53.920321] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:45.229 [2024-07-15 07:24:54.011470] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.229 [2024-07-15 07:24:54.011792] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.229 [2024-07-15 07:24:54.012128] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.229 [2024-07-15 07:24:54.012468] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.229 [2024-07-15 07:24:54.012668] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:45.229 [2024-07-15 07:24:54.012936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.230 [2024-07-15 07:24:54.013031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.230 [2024-07-15 07:24:54.013612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:45.230 [2024-07-15 07:24:54.013640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.230 [2024-07-15 07:24:54.050358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:20:45.230 07:24:54 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
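The class-code walk above (class 01, subclass 08, prog-if 02, i.e. NVMe) reduces to a single lspci filter. A rough equivalent is sketched below; it is a simplification, since the harness additionally applies its PCI allow/deny lists and checks which driver each device is bound to under /sys/bus/pci/drivers.

# Print the full BDF of every NVMe controller (PCI class 01, subclass 08),
# matching the 0000:00:10.0 and 0000:00:11.0 devices picked up above.
lspci -Dnn | awk '/\[0108\]/ { print $1 }'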
00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:45.230 07:24:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:45.488 ************************************ 00:20:45.488 START TEST spdk_target_abort 00:20:45.488 ************************************ 00:20:45.488 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:20:45.488 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:20:45.488 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:20:45.488 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.488 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:45.488 spdk_targetn1 00:20:45.488 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.488 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:45.488 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.488 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:45.488 [2024-07-15 07:24:54.276522] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.488 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.488 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:20:45.488 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.488 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:45.488 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.488 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:45.489 [2024-07-15 07:24:54.304666] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.489 07:24:54 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:45.489 07:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:48.790 Initializing NVMe Controllers 00:20:48.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:48.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:48.790 Initialization complete. Launching workers. 
00:20:48.790 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11096, failed: 0 00:20:48.790 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1018, failed to submit 10078 00:20:48.790 success 814, unsuccess 204, failed 0 00:20:48.790 07:24:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:48.790 07:24:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:52.073 Initializing NVMe Controllers 00:20:52.073 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:52.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:52.073 Initialization complete. Launching workers. 00:20:52.073 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8884, failed: 0 00:20:52.073 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1155, failed to submit 7729 00:20:52.073 success 404, unsuccess 751, failed 0 00:20:52.073 07:25:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:52.073 07:25:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:55.444 Initializing NVMe Controllers 00:20:55.444 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:55.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:55.444 Initialization complete. Launching workers. 
00:20:55.444 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30589, failed: 0 00:20:55.444 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2280, failed to submit 28309 00:20:55.444 success 406, unsuccess 1874, failed 0 00:20:55.444 07:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:20:55.444 07:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.444 07:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:55.444 07:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.444 07:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:20:55.444 07:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.444 07:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:55.702 07:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.702 07:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84165 00:20:55.702 07:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 84165 ']' 00:20:55.702 07:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 84165 00:20:55.702 07:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:20:55.702 07:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:55.702 07:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84165 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:55.976 killing process with pid 84165 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84165' 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 84165 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 84165 00:20:55.976 00:20:55.976 real 0m10.674s 00:20:55.976 user 0m40.559s 00:20:55.976 sys 0m2.197s 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:55.976 ************************************ 00:20:55.976 END TEST spdk_target_abort 00:20:55.976 ************************************ 00:20:55.976 07:25:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:20:55.976 07:25:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:20:55.976 07:25:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:55.976 07:25:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:55.976 07:25:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:55.976 
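Condensed, the spdk_target_abort test that just finished is the loop below; the three abort invocations appear verbatim in the trace above. The kernel_target_abort test that starts next repeats the same loop against the kernel nvmet listener on 10.0.0.1.

# Abort-heavy mixed I/O at three queue depths against the SPDK TCP target.
for qd in 4 24 64; do
    /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done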
************************************ 00:20:55.976 START TEST kernel_target_abort 00:20:55.976 ************************************ 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:55.976 07:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:20:56.235 07:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:56.235 07:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:56.492 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:56.492 Waiting for block devices as requested 00:20:56.492 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:56.492 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:56.492 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:56.492 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:56.492 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:20:56.492 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:20:56.492 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:56.492 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:56.492 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:20:56.492 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:56.492 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:56.751 No valid GPT data, bailing 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:56.751 No valid GPT data, bailing 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:56.751 No valid GPT data, bailing 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:56.751 No valid GPT data, bailing 00:20:56.751 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:57.009 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 --hostid=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 -a 10.0.0.1 -t tcp -s 4420 00:20:57.010 00:20:57.010 Discovery Log Number of Records 2, Generation counter 2 00:20:57.010 =====Discovery Log Entry 0====== 00:20:57.010 trtype: tcp 00:20:57.010 adrfam: ipv4 00:20:57.010 subtype: current discovery subsystem 00:20:57.010 treq: not specified, sq flow control disable supported 00:20:57.010 portid: 1 00:20:57.010 trsvcid: 4420 00:20:57.010 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:57.010 traddr: 10.0.0.1 00:20:57.010 eflags: none 00:20:57.010 sectype: none 00:20:57.010 =====Discovery Log Entry 1====== 00:20:57.010 trtype: tcp 00:20:57.010 adrfam: ipv4 00:20:57.010 subtype: nvme subsystem 00:20:57.010 treq: not specified, sq flow control disable supported 00:20:57.010 portid: 1 00:20:57.010 trsvcid: 4420 00:20:57.010 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:57.010 traddr: 10.0.0.1 00:20:57.010 eflags: none 00:20:57.010 sectype: none 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:57.010 07:25:05 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:57.010 07:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:00.332 Initializing NVMe Controllers 00:21:00.332 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:00.332 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:00.332 Initialization complete. Launching workers. 00:21:00.332 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34791, failed: 0 00:21:00.332 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34791, failed to submit 0 00:21:00.332 success 0, unsuccess 34791, failed 0 00:21:00.332 07:25:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:00.332 07:25:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:03.616 Initializing NVMe Controllers 00:21:03.616 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:03.616 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:03.616 Initialization complete. Launching workers. 
00:21:03.616 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68313, failed: 0 00:21:03.616 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29264, failed to submit 39049 00:21:03.616 success 0, unsuccess 29264, failed 0 00:21:03.616 07:25:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:03.616 07:25:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:06.895 Initializing NVMe Controllers 00:21:06.895 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:06.895 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:06.896 Initialization complete. Launching workers. 00:21:06.896 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 76635, failed: 0 00:21:06.896 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19152, failed to submit 57483 00:21:06.896 success 0, unsuccess 19152, failed 0 00:21:06.896 07:25:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:06.896 07:25:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:06.896 07:25:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:21:06.896 07:25:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:06.896 07:25:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:06.896 07:25:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:06.896 07:25:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:06.896 07:25:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:21:06.896 07:25:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:21:06.896 07:25:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:07.154 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:09.054 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:09.054 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:09.054 00:21:09.054 real 0m12.767s 00:21:09.054 user 0m6.321s 00:21:09.054 sys 0m3.894s 00:21:09.054 07:25:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:09.054 07:25:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:09.054 ************************************ 00:21:09.054 END TEST kernel_target_abort 00:21:09.054 ************************************ 00:21:09.054 07:25:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:21:09.054 07:25:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:09.054 
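
For reference, the kernel_target_abort flow traced above reduces to a short script: build a kernel nvmet subsystem, namespace and TCP port out of configfs entries (nvmf/common.sh@650-677), point SPDK's abort example at it once per queue depth in qds=(4 24 64) (abort_qd_sizes.sh@26-34), then unwind the configfs tree and unload the modules (nvmf/common.sh@684-698). The sketch below is run from the SPDK repo root and assumes the standard kernel nvmet attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*), since bash xtrace does not print redirection targets; the NQN, block device and addressing values are the ones in the trace.

  #!/usr/bin/env bash
  # Sketch of the kernel nvmet target setup, abort sweep and teardown traced above.
  set -e
  nqn=nqn.2016-06.io.spdk:testnqn
  nvme=/dev/nvme1n1        # the last unpartitioned, non-zoned namespace found by the scan above
  cfs=/sys/kernel/config/nvmet

  modprobe nvmet nvmet_tcp
  mkdir "$cfs/subsystems/$nqn" "$cfs/subsystems/$nqn/namespaces/1" "$cfs/ports/1"
  echo "SPDK-$nqn" > "$cfs/subsystems/$nqn/attr_model"           # destination file assumed
  echo 1           > "$cfs/subsystems/$nqn/attr_allow_any_host"  # destination file assumed
  echo "$nvme"     > "$cfs/subsystems/$nqn/namespaces/1/device_path"
  echo 1           > "$cfs/subsystems/$nqn/namespaces/1/enable"
  echo 10.0.0.1    > "$cfs/ports/1/addr_traddr"
  echo tcp         > "$cfs/ports/1/addr_trtype"
  echo 4420        > "$cfs/ports/1/addr_trsvcid"
  echo ipv4        > "$cfs/ports/1/addr_adrfam"
  ln -s "$cfs/subsystems/$nqn" "$cfs/ports/1/subsystems/"

  # One abort run per queue depth, exactly as abort_qd_sizes.sh@26-34 assembles it
  target="trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:$nqn"
  for qd in 4 24 64; do
      ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done

  # Teardown mirrors clean_kernel_target: disable the namespace, unlink the port, remove the tree
  echo 0 > "$cfs/subsystems/$nqn/namespaces/1/enable"            # destination file assumed
  rm -f "$cfs/ports/1/subsystems/$nqn"
  rmdir "$cfs/subsystems/$nqn/namespaces/1" "$cfs/ports/1" "$cfs/subsystems/$nqn"
  modprobe -r nvmet_tcp nvmet
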
07:25:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:09.054 07:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:09.054 07:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:21:09.054 07:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:09.054 07:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:21:09.054 07:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:09.054 07:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:09.054 rmmod nvme_tcp 00:21:09.054 rmmod nvme_fabrics 00:21:09.054 rmmod nvme_keyring 00:21:09.054 07:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:09.054 07:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:21:09.054 07:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:21:09.054 07:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 84165 ']' 00:21:09.054 07:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 84165 00:21:09.054 07:25:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 84165 ']' 00:21:09.054 07:25:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 84165 00:21:09.054 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (84165) - No such process 00:21:09.054 07:25:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 84165 is not found' 00:21:09.054 Process with pid 84165 is not found 00:21:09.054 07:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:09.054 07:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:09.311 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:09.311 Waiting for block devices as requested 00:21:09.311 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:09.311 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:09.570 07:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:09.570 07:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:09.570 07:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:09.570 07:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:09.570 07:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.570 07:25:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:09.570 07:25:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.570 07:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:09.570 00:21:09.570 real 0m25.976s 00:21:09.570 user 0m47.861s 00:21:09.570 sys 0m7.408s 00:21:09.570 07:25:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:09.570 07:25:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:09.570 ************************************ 00:21:09.570 END TEST nvmf_abort_qd_sizes 00:21:09.570 ************************************ 00:21:09.570 07:25:18 -- common/autotest_common.sh@1142 -- # return 0 00:21:09.570 07:25:18 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:09.570 07:25:18 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:21:09.570 07:25:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:09.570 07:25:18 -- common/autotest_common.sh@10 -- # set +x 00:21:09.570 ************************************ 00:21:09.570 START TEST keyring_file 00:21:09.570 ************************************ 00:21:09.570 07:25:18 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:09.570 * Looking for test storage... 00:21:09.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:09.570 07:25:18 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:09.570 07:25:18 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:09.570 07:25:18 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:09.570 07:25:18 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.570 07:25:18 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.570 07:25:18 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.570 07:25:18 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.570 07:25:18 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.570 07:25:18 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:09.570 07:25:18 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@47 -- # : 0 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:09.570 07:25:18 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:09.570 07:25:18 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:09.570 07:25:18 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:09.571 07:25:18 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:09.571 07:25:18 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:09.571 07:25:18 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:09.571 07:25:18 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:09.571 07:25:18 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:09.571 07:25:18 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:09.571 07:25:18 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:09.571 07:25:18 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:09.571 07:25:18 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:09.571 07:25:18 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:09.571 07:25:18 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.121n2q3S29 00:21:09.571 07:25:18 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:09.571 07:25:18 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:21:09.571 07:25:18 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:09.571 07:25:18 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:09.571 07:25:18 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:09.571 07:25:18 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:09.571 07:25:18 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:09.828 07:25:18 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.121n2q3S29 00:21:09.828 07:25:18 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.121n2q3S29 00:21:09.828 07:25:18 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.121n2q3S29 00:21:09.828 07:25:18 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:09.828 07:25:18 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:09.828 07:25:18 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:09.828 07:25:18 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:09.828 07:25:18 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:09.828 07:25:18 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:09.828 07:25:18 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uHYT2OEqdf 00:21:09.828 07:25:18 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:09.828 07:25:18 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:09.828 07:25:18 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:09.828 07:25:18 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:09.828 07:25:18 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:21:09.828 07:25:18 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:09.828 07:25:18 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:09.828 07:25:18 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uHYT2OEqdf 00:21:09.828 07:25:18 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uHYT2OEqdf 00:21:09.828 07:25:18 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.uHYT2OEqdf 00:21:09.829 07:25:18 keyring_file -- keyring/file.sh@30 -- # tgtpid=85008 00:21:09.829 07:25:18 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85008 00:21:09.829 07:25:18 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:09.829 07:25:18 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85008 ']' 00:21:09.829 07:25:18 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.829 07:25:18 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:09.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.829 07:25:18 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.829 07:25:18 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:09.829 07:25:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:09.829 [2024-07-15 07:25:18.683685] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
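
The two /tmp/tmp.* files created above come out of keyring/common.sh's prep_key, which wraps nvmf/common.sh's format_interchange_psk (the NVMeTLSkey-1 interchange encoding is produced by the inline python step visible in the trace). A rough equivalent for key0 is sketched below; it assumes it runs inside the SPDK test environment so that test/nvmf/common.sh can be sourced, and reuses the key value and digest from file.sh@15.

  # Sketch of "prep_key key0 00112233445566778899aabbccddeeff 0" (values from the trace).
  source "$rootdir/test/nvmf/common.sh"    # assumes the SPDK test env has already set $rootdir

  key=00112233445566778899aabbccddeeff     # raw PSK from file.sh@15
  digest=0                                 # same digest selector the test passes
  path=$(mktemp)                           # e.g. /tmp/tmp.121n2q3S29 in this run

  format_interchange_psk "$key" "$digest" > "$path"
  chmod 0600 "$path"   # group/world-accessible key files are rejected - see the 0660 negative test later in this run
  echo "$path"
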
00:21:09.829 [2024-07-15 07:25:18.683793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85008 ] 00:21:10.086 [2024-07-15 07:25:18.822665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.086 [2024-07-15 07:25:18.895154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.086 [2024-07-15 07:25:18.929275] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:10.345 07:25:19 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:10.345 07:25:19 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:10.345 07:25:19 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:10.345 07:25:19 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.345 07:25:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:10.345 [2024-07-15 07:25:19.069135] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.345 null0 00:21:10.345 [2024-07-15 07:25:19.101064] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:10.345 [2024-07-15 07:25:19.101358] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:10.345 [2024-07-15 07:25:19.109091] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:10.345 07:25:19 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.345 07:25:19 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:10.345 07:25:19 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:10.345 07:25:19 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:10.345 07:25:19 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:10.345 07:25:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:10.345 07:25:19 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:10.345 07:25:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:10.345 07:25:19 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:10.345 07:25:19 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.345 07:25:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:10.345 [2024-07-15 07:25:19.121084] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:10.345 request: 00:21:10.345 { 00:21:10.345 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:10.345 "secure_channel": false, 00:21:10.345 "listen_address": { 00:21:10.345 "trtype": "tcp", 00:21:10.345 "traddr": "127.0.0.1", 00:21:10.345 "trsvcid": "4420" 00:21:10.345 }, 00:21:10.345 "method": "nvmf_subsystem_add_listener", 00:21:10.345 "req_id": 1 00:21:10.345 } 00:21:10.345 Got JSON-RPC error response 00:21:10.345 response: 00:21:10.345 { 00:21:10.345 "code": -32602, 00:21:10.345 "message": "Invalid parameters" 00:21:10.345 } 00:21:10.345 07:25:19 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 
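
The JSON-RPC error just above is the expected outcome of the duplicate-listener check at file.sh@43: the target is already listening for nqn.2016-06.io.spdk:cnode0 on 127.0.0.1:4420 (with the PSK path that triggered the deprecation warning), so a second nvmf_subsystem_add_listener must fail. Stripped of the NOT wrapper, that check is simply the following; the socket and NQN are the ones in the trace.

  # file.sh@43 without the NOT wrapper: re-adding the same listener has to fail
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # talks to spdk_tgt on the default /var/tmp/spdk.sock

  if "$rpc" nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0; then
      echo "unexpected: duplicate listener accepted" >&2
      exit 1
  fi
  # expected: JSON-RPC error -32602, "Listener already exists"
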
00:21:10.345 07:25:19 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:10.345 07:25:19 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:10.345 07:25:19 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:10.345 07:25:19 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:10.345 07:25:19 keyring_file -- keyring/file.sh@46 -- # bperfpid=85017 00:21:10.345 07:25:19 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:10.345 07:25:19 keyring_file -- keyring/file.sh@48 -- # waitforlisten 85017 /var/tmp/bperf.sock 00:21:10.345 07:25:19 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85017 ']' 00:21:10.345 07:25:19 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:10.345 07:25:19 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:10.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:10.345 07:25:19 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:10.345 07:25:19 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:10.345 07:25:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:10.345 [2024-07-15 07:25:19.175340] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 00:21:10.345 [2024-07-15 07:25:19.175424] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85017 ] 00:21:10.603 [2024-07-15 07:25:19.309479] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.603 [2024-07-15 07:25:19.382513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.603 [2024-07-15 07:25:19.411615] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:11.538 07:25:20 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:11.538 07:25:20 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:11.538 07:25:20 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.121n2q3S29 00:21:11.538 07:25:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.121n2q3S29 00:21:11.812 07:25:20 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uHYT2OEqdf 00:21:11.812 07:25:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uHYT2OEqdf 00:21:12.070 07:25:20 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:21:12.070 07:25:20 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:21:12.070 07:25:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:12.070 07:25:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:12.070 07:25:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:12.329 07:25:21 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.121n2q3S29 == 
\/\t\m\p\/\t\m\p\.\1\2\1\n\2\q\3\S\2\9 ]] 00:21:12.329 07:25:21 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:21:12.329 07:25:21 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:12.329 07:25:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:12.329 07:25:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:12.329 07:25:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:12.587 07:25:21 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.uHYT2OEqdf == \/\t\m\p\/\t\m\p\.\u\H\Y\T\2\O\E\q\d\f ]] 00:21:12.587 07:25:21 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:21:12.587 07:25:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:12.587 07:25:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:12.587 07:25:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:12.587 07:25:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:12.587 07:25:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:12.845 07:25:21 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:21:12.845 07:25:21 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:21:12.845 07:25:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:12.845 07:25:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:12.845 07:25:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:12.845 07:25:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:12.845 07:25:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:13.103 07:25:21 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:13.104 07:25:21 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:13.104 07:25:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:13.362 [2024-07-15 07:25:22.185627] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:13.362 nvme0n1 00:21:13.362 07:25:22 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:21:13.362 07:25:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:13.362 07:25:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:13.362 07:25:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:13.362 07:25:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:13.362 07:25:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:13.620 07:25:22 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:21:13.621 07:25:22 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:21:13.621 07:25:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:13.621 07:25:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:13.621 07:25:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:21:13.621 07:25:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:13.621 07:25:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:14.186 07:25:22 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:21:14.186 07:25:22 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:14.186 Running I/O for 1 seconds... 00:21:15.115 00:21:15.115 Latency(us) 00:21:15.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.115 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:15.115 nvme0n1 : 1.01 10710.42 41.84 0.00 0.00 11910.62 5987.61 19899.11 00:21:15.115 =================================================================================================================== 00:21:15.115 Total : 10710.42 41.84 0.00 0.00 11910.62 5987.61 19899.11 00:21:15.115 0 00:21:15.115 07:25:24 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:15.115 07:25:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:15.680 07:25:24 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:21:15.680 07:25:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:15.680 07:25:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:15.680 07:25:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:15.680 07:25:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:15.680 07:25:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:15.680 07:25:24 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:21:15.680 07:25:24 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:21:15.680 07:25:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:15.680 07:25:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:15.680 07:25:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:15.680 07:25:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:15.680 07:25:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:15.938 07:25:24 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:15.938 07:25:24 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:15.938 07:25:24 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:15.938 07:25:24 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:15.938 07:25:24 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:15.938 07:25:24 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:15.938 07:25:24 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:15.938 07:25:24 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
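
Everything the bperf_cmd/get_refcnt helpers above do against the bdevperf instance can be reproduced with plain rpc.py calls on /var/tmp/bperf.sock. The sketch below uses only names, paths and flags that appear in the trace and assumes bdevperf is already running with "-q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z" as started at file.sh@45.

  bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

  # Register both PSK files with the bdevperf keyring
  bperf_rpc keyring_file_add_key key0 /tmp/tmp.121n2q3S29
  bperf_rpc keyring_file_add_key key1 /tmp/tmp.uHYT2OEqdf

  # refcnt is 1 after registration and climbs to 2 while a controller holds the key
  bperf_rpc keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'

  # Attach over TLS using key0, drive one second of 50/50 randrw I/O, then detach
  bperf_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  bperf_rpc bdev_nvme_detach_controller nvme0

The NOT-wrapped attach with key1 that is being set up at this point in the trace fails with -5 (Input/output error), since the TLS handshake cannot complete with a mismatched PSK.
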
00:21:15.938 07:25:24 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:15.938 07:25:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:16.195 [2024-07-15 07:25:25.097189] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:16.195 [2024-07-15 07:25:25.097669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cf4f0 (107): Transport endpoint is not connected 00:21:16.195 [2024-07-15 07:25:25.098647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cf4f0 (9): Bad file descriptor 00:21:16.195 [2024-07-15 07:25:25.099643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:16.195 [2024-07-15 07:25:25.099668] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:16.195 [2024-07-15 07:25:25.099679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:16.195 request: 00:21:16.195 { 00:21:16.195 "name": "nvme0", 00:21:16.195 "trtype": "tcp", 00:21:16.195 "traddr": "127.0.0.1", 00:21:16.195 "adrfam": "ipv4", 00:21:16.195 "trsvcid": "4420", 00:21:16.195 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:16.195 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:16.195 "prchk_reftag": false, 00:21:16.195 "prchk_guard": false, 00:21:16.195 "hdgst": false, 00:21:16.195 "ddgst": false, 00:21:16.195 "psk": "key1", 00:21:16.195 "method": "bdev_nvme_attach_controller", 00:21:16.195 "req_id": 1 00:21:16.195 } 00:21:16.195 Got JSON-RPC error response 00:21:16.195 response: 00:21:16.195 { 00:21:16.195 "code": -5, 00:21:16.195 "message": "Input/output error" 00:21:16.195 } 00:21:16.195 07:25:25 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:16.195 07:25:25 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:16.195 07:25:25 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:16.195 07:25:25 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:16.195 07:25:25 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:21:16.195 07:25:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:16.195 07:25:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:16.195 07:25:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:16.195 07:25:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:16.196 07:25:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:16.454 07:25:25 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:21:16.454 07:25:25 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:21:16.454 07:25:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:16.454 07:25:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:16.454 07:25:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:16.454 07:25:25 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:16.454 07:25:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:16.712 07:25:25 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:16.712 07:25:25 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:21:16.712 07:25:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:17.305 07:25:25 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:21:17.305 07:25:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:17.305 07:25:26 keyring_file -- keyring/file.sh@77 -- # jq length 00:21:17.305 07:25:26 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:21:17.305 07:25:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:17.868 07:25:26 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:21:17.868 07:25:26 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.121n2q3S29 00:21:17.868 07:25:26 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.121n2q3S29 00:21:17.868 07:25:26 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:17.868 07:25:26 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.121n2q3S29 00:21:17.868 07:25:26 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:17.868 07:25:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:17.868 07:25:26 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:17.868 07:25:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:17.868 07:25:26 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.121n2q3S29 00:21:17.868 07:25:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.121n2q3S29 00:21:18.126 [2024-07-15 07:25:26.831033] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.121n2q3S29': 0100660 00:21:18.126 [2024-07-15 07:25:26.831098] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:18.126 request: 00:21:18.126 { 00:21:18.126 "name": "key0", 00:21:18.126 "path": "/tmp/tmp.121n2q3S29", 00:21:18.126 "method": "keyring_file_add_key", 00:21:18.126 "req_id": 1 00:21:18.126 } 00:21:18.126 Got JSON-RPC error response 00:21:18.126 response: 00:21:18.126 { 00:21:18.126 "code": -1, 00:21:18.126 "message": "Operation not permitted" 00:21:18.126 } 00:21:18.126 07:25:26 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:18.126 07:25:26 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:18.126 07:25:26 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:18.126 07:25:26 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:18.126 07:25:26 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.121n2q3S29 00:21:18.126 07:25:26 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.121n2q3S29 00:21:18.126 07:25:26 keyring_file -- keyring/common.sh@8 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.121n2q3S29 00:21:18.383 07:25:27 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.121n2q3S29 00:21:18.383 07:25:27 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:21:18.383 07:25:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:18.383 07:25:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:18.383 07:25:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:18.383 07:25:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:18.383 07:25:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:18.641 07:25:27 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:21:18.641 07:25:27 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:18.641 07:25:27 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:18.641 07:25:27 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:18.641 07:25:27 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:18.641 07:25:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:18.641 07:25:27 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:18.641 07:25:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:18.641 07:25:27 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:18.641 07:25:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:18.899 [2024-07-15 07:25:27.667247] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.121n2q3S29': No such file or directory 00:21:18.899 [2024-07-15 07:25:27.667294] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:18.899 [2024-07-15 07:25:27.667322] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:18.899 [2024-07-15 07:25:27.667331] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:18.899 [2024-07-15 07:25:27.667340] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:18.899 request: 00:21:18.899 { 00:21:18.899 "name": "nvme0", 00:21:18.899 "trtype": "tcp", 00:21:18.899 "traddr": "127.0.0.1", 00:21:18.899 "adrfam": "ipv4", 00:21:18.899 "trsvcid": "4420", 00:21:18.899 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:18.899 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:18.899 "prchk_reftag": false, 00:21:18.899 "prchk_guard": false, 00:21:18.899 "hdgst": false, 00:21:18.899 "ddgst": false, 00:21:18.899 "psk": "key0", 00:21:18.899 "method": "bdev_nvme_attach_controller", 00:21:18.899 "req_id": 1 00:21:18.899 } 00:21:18.899 
Got JSON-RPC error response 00:21:18.899 response: 00:21:18.899 { 00:21:18.899 "code": -19, 00:21:18.899 "message": "No such device" 00:21:18.899 } 00:21:18.899 07:25:27 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:18.899 07:25:27 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:18.899 07:25:27 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:18.899 07:25:27 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:18.899 07:25:27 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:21:18.899 07:25:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:19.157 07:25:27 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:19.157 07:25:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:19.157 07:25:27 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:19.157 07:25:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:19.157 07:25:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:19.157 07:25:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:19.157 07:25:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.8N6b7iUIgh 00:21:19.157 07:25:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:19.157 07:25:27 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:19.157 07:25:27 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:19.157 07:25:27 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:19.157 07:25:27 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:19.157 07:25:27 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:19.157 07:25:27 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:19.157 07:25:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.8N6b7iUIgh 00:21:19.157 07:25:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.8N6b7iUIgh 00:21:19.157 07:25:28 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.8N6b7iUIgh 00:21:19.157 07:25:28 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8N6b7iUIgh 00:21:19.157 07:25:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8N6b7iUIgh 00:21:19.414 07:25:28 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:19.414 07:25:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:19.671 nvme0n1 00:21:19.671 07:25:28 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:21:19.671 07:25:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:19.671 07:25:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:19.671 07:25:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:19.671 07:25:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:21:19.671 07:25:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:19.928 07:25:28 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:21:19.928 07:25:28 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:21:19.928 07:25:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:20.186 07:25:29 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:21:20.186 07:25:29 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:21:20.186 07:25:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:20.186 07:25:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:20.186 07:25:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:20.444 07:25:29 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:21:20.444 07:25:29 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:21:20.444 07:25:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:20.444 07:25:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:20.444 07:25:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:20.444 07:25:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:20.444 07:25:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:20.702 07:25:29 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:21:20.702 07:25:29 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:20.702 07:25:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:21.269 07:25:29 keyring_file -- keyring/file.sh@104 -- # jq length 00:21:21.269 07:25:29 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:21:21.269 07:25:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:21.527 07:25:30 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:21:21.527 07:25:30 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8N6b7iUIgh 00:21:21.527 07:25:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8N6b7iUIgh 00:21:21.785 07:25:30 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uHYT2OEqdf 00:21:21.785 07:25:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uHYT2OEqdf 00:21:21.785 07:25:30 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:21.785 07:25:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:22.351 nvme0n1 00:21:22.351 07:25:31 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:21:22.351 07:25:31 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:22.610 07:25:31 keyring_file -- keyring/file.sh@112 -- # config='{ 00:21:22.610 "subsystems": [ 00:21:22.610 { 00:21:22.610 "subsystem": "keyring", 00:21:22.610 "config": [ 00:21:22.610 { 00:21:22.610 "method": "keyring_file_add_key", 00:21:22.610 "params": { 00:21:22.610 "name": "key0", 00:21:22.610 "path": "/tmp/tmp.8N6b7iUIgh" 00:21:22.610 } 00:21:22.610 }, 00:21:22.610 { 00:21:22.610 "method": "keyring_file_add_key", 00:21:22.610 "params": { 00:21:22.610 "name": "key1", 00:21:22.610 "path": "/tmp/tmp.uHYT2OEqdf" 00:21:22.610 } 00:21:22.610 } 00:21:22.610 ] 00:21:22.610 }, 00:21:22.610 { 00:21:22.610 "subsystem": "iobuf", 00:21:22.610 "config": [ 00:21:22.610 { 00:21:22.610 "method": "iobuf_set_options", 00:21:22.610 "params": { 00:21:22.610 "small_pool_count": 8192, 00:21:22.610 "large_pool_count": 1024, 00:21:22.610 "small_bufsize": 8192, 00:21:22.610 "large_bufsize": 135168 00:21:22.610 } 00:21:22.610 } 00:21:22.610 ] 00:21:22.610 }, 00:21:22.610 { 00:21:22.610 "subsystem": "sock", 00:21:22.610 "config": [ 00:21:22.610 { 00:21:22.610 "method": "sock_set_default_impl", 00:21:22.610 "params": { 00:21:22.610 "impl_name": "uring" 00:21:22.610 } 00:21:22.610 }, 00:21:22.610 { 00:21:22.610 "method": "sock_impl_set_options", 00:21:22.610 "params": { 00:21:22.610 "impl_name": "ssl", 00:21:22.610 "recv_buf_size": 4096, 00:21:22.610 "send_buf_size": 4096, 00:21:22.610 "enable_recv_pipe": true, 00:21:22.610 "enable_quickack": false, 00:21:22.610 "enable_placement_id": 0, 00:21:22.610 "enable_zerocopy_send_server": true, 00:21:22.610 "enable_zerocopy_send_client": false, 00:21:22.610 "zerocopy_threshold": 0, 00:21:22.610 "tls_version": 0, 00:21:22.610 "enable_ktls": false 00:21:22.610 } 00:21:22.610 }, 00:21:22.610 { 00:21:22.610 "method": "sock_impl_set_options", 00:21:22.610 "params": { 00:21:22.610 "impl_name": "posix", 00:21:22.610 "recv_buf_size": 2097152, 00:21:22.610 "send_buf_size": 2097152, 00:21:22.610 "enable_recv_pipe": true, 00:21:22.610 "enable_quickack": false, 00:21:22.610 "enable_placement_id": 0, 00:21:22.610 "enable_zerocopy_send_server": true, 00:21:22.610 "enable_zerocopy_send_client": false, 00:21:22.610 "zerocopy_threshold": 0, 00:21:22.610 "tls_version": 0, 00:21:22.610 "enable_ktls": false 00:21:22.610 } 00:21:22.610 }, 00:21:22.610 { 00:21:22.610 "method": "sock_impl_set_options", 00:21:22.610 "params": { 00:21:22.610 "impl_name": "uring", 00:21:22.610 "recv_buf_size": 2097152, 00:21:22.610 "send_buf_size": 2097152, 00:21:22.610 "enable_recv_pipe": true, 00:21:22.610 "enable_quickack": false, 00:21:22.610 "enable_placement_id": 0, 00:21:22.610 "enable_zerocopy_send_server": false, 00:21:22.610 "enable_zerocopy_send_client": false, 00:21:22.610 "zerocopy_threshold": 0, 00:21:22.610 "tls_version": 0, 00:21:22.610 "enable_ktls": false 00:21:22.610 } 00:21:22.610 } 00:21:22.610 ] 00:21:22.610 }, 00:21:22.610 { 00:21:22.610 "subsystem": "vmd", 00:21:22.610 "config": [] 00:21:22.610 }, 00:21:22.610 { 00:21:22.610 "subsystem": "accel", 00:21:22.610 "config": [ 00:21:22.610 { 00:21:22.610 "method": "accel_set_options", 00:21:22.610 "params": { 00:21:22.610 "small_cache_size": 128, 00:21:22.610 "large_cache_size": 16, 00:21:22.610 "task_count": 2048, 00:21:22.610 "sequence_count": 2048, 00:21:22.610 "buf_count": 2048 00:21:22.610 } 00:21:22.610 } 00:21:22.610 ] 00:21:22.610 }, 00:21:22.610 { 00:21:22.610 "subsystem": "bdev", 00:21:22.610 "config": [ 00:21:22.610 { 
00:21:22.610 "method": "bdev_set_options", 00:21:22.610 "params": { 00:21:22.610 "bdev_io_pool_size": 65535, 00:21:22.610 "bdev_io_cache_size": 256, 00:21:22.610 "bdev_auto_examine": true, 00:21:22.610 "iobuf_small_cache_size": 128, 00:21:22.610 "iobuf_large_cache_size": 16 00:21:22.610 } 00:21:22.610 }, 00:21:22.610 { 00:21:22.610 "method": "bdev_raid_set_options", 00:21:22.610 "params": { 00:21:22.610 "process_window_size_kb": 1024 00:21:22.610 } 00:21:22.610 }, 00:21:22.610 { 00:21:22.610 "method": "bdev_iscsi_set_options", 00:21:22.610 "params": { 00:21:22.610 "timeout_sec": 30 00:21:22.610 } 00:21:22.610 }, 00:21:22.610 { 00:21:22.610 "method": "bdev_nvme_set_options", 00:21:22.610 "params": { 00:21:22.610 "action_on_timeout": "none", 00:21:22.610 "timeout_us": 0, 00:21:22.610 "timeout_admin_us": 0, 00:21:22.610 "keep_alive_timeout_ms": 10000, 00:21:22.610 "arbitration_burst": 0, 00:21:22.610 "low_priority_weight": 0, 00:21:22.610 "medium_priority_weight": 0, 00:21:22.610 "high_priority_weight": 0, 00:21:22.610 "nvme_adminq_poll_period_us": 10000, 00:21:22.610 "nvme_ioq_poll_period_us": 0, 00:21:22.610 "io_queue_requests": 512, 00:21:22.610 "delay_cmd_submit": true, 00:21:22.610 "transport_retry_count": 4, 00:21:22.610 "bdev_retry_count": 3, 00:21:22.610 "transport_ack_timeout": 0, 00:21:22.610 "ctrlr_loss_timeout_sec": 0, 00:21:22.610 "reconnect_delay_sec": 0, 00:21:22.610 "fast_io_fail_timeout_sec": 0, 00:21:22.610 "disable_auto_failback": false, 00:21:22.610 "generate_uuids": false, 00:21:22.610 "transport_tos": 0, 00:21:22.610 "nvme_error_stat": false, 00:21:22.610 "rdma_srq_size": 0, 00:21:22.610 "io_path_stat": false, 00:21:22.610 "allow_accel_sequence": false, 00:21:22.610 "rdma_max_cq_size": 0, 00:21:22.610 "rdma_cm_event_timeout_ms": 0, 00:21:22.610 "dhchap_digests": [ 00:21:22.610 "sha256", 00:21:22.610 "sha384", 00:21:22.610 "sha512" 00:21:22.610 ], 00:21:22.610 "dhchap_dhgroups": [ 00:21:22.610 "null", 00:21:22.610 "ffdhe2048", 00:21:22.610 "ffdhe3072", 00:21:22.610 "ffdhe4096", 00:21:22.610 "ffdhe6144", 00:21:22.610 "ffdhe8192" 00:21:22.610 ] 00:21:22.610 } 00:21:22.610 }, 00:21:22.610 { 00:21:22.610 "method": "bdev_nvme_attach_controller", 00:21:22.610 "params": { 00:21:22.611 "name": "nvme0", 00:21:22.611 "trtype": "TCP", 00:21:22.611 "adrfam": "IPv4", 00:21:22.611 "traddr": "127.0.0.1", 00:21:22.611 "trsvcid": "4420", 00:21:22.611 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:22.611 "prchk_reftag": false, 00:21:22.611 "prchk_guard": false, 00:21:22.611 "ctrlr_loss_timeout_sec": 0, 00:21:22.611 "reconnect_delay_sec": 0, 00:21:22.611 "fast_io_fail_timeout_sec": 0, 00:21:22.611 "psk": "key0", 00:21:22.611 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:22.611 "hdgst": false, 00:21:22.611 "ddgst": false 00:21:22.611 } 00:21:22.611 }, 00:21:22.611 { 00:21:22.611 "method": "bdev_nvme_set_hotplug", 00:21:22.611 "params": { 00:21:22.611 "period_us": 100000, 00:21:22.611 "enable": false 00:21:22.611 } 00:21:22.611 }, 00:21:22.611 { 00:21:22.611 "method": "bdev_wait_for_examine" 00:21:22.611 } 00:21:22.611 ] 00:21:22.611 }, 00:21:22.611 { 00:21:22.611 "subsystem": "nbd", 00:21:22.611 "config": [] 00:21:22.611 } 00:21:22.611 ] 00:21:22.611 }' 00:21:22.611 07:25:31 keyring_file -- keyring/file.sh@114 -- # killprocess 85017 00:21:22.611 07:25:31 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85017 ']' 00:21:22.611 07:25:31 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85017 00:21:22.611 07:25:31 keyring_file -- common/autotest_common.sh@953 -- # uname 
00:21:22.611 07:25:31 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:22.611 07:25:31 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85017 00:21:22.611 killing process with pid 85017 00:21:22.611 Received shutdown signal, test time was about 1.000000 seconds 00:21:22.611 00:21:22.611 Latency(us) 00:21:22.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.611 =================================================================================================================== 00:21:22.611 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:22.611 07:25:31 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:22.611 07:25:31 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:22.611 07:25:31 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85017' 00:21:22.611 07:25:31 keyring_file -- common/autotest_common.sh@967 -- # kill 85017 00:21:22.611 07:25:31 keyring_file -- common/autotest_common.sh@972 -- # wait 85017 00:21:22.869 07:25:31 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:22.869 07:25:31 keyring_file -- keyring/file.sh@117 -- # bperfpid=85273 00:21:22.869 07:25:31 keyring_file -- keyring/file.sh@119 -- # waitforlisten 85273 /var/tmp/bperf.sock 00:21:22.869 07:25:31 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85273 ']' 00:21:22.869 07:25:31 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:21:22.869 "subsystems": [ 00:21:22.869 { 00:21:22.869 "subsystem": "keyring", 00:21:22.869 "config": [ 00:21:22.869 { 00:21:22.869 "method": "keyring_file_add_key", 00:21:22.869 "params": { 00:21:22.869 "name": "key0", 00:21:22.869 "path": "/tmp/tmp.8N6b7iUIgh" 00:21:22.869 } 00:21:22.869 }, 00:21:22.869 { 00:21:22.869 "method": "keyring_file_add_key", 00:21:22.869 "params": { 00:21:22.869 "name": "key1", 00:21:22.869 "path": "/tmp/tmp.uHYT2OEqdf" 00:21:22.869 } 00:21:22.869 } 00:21:22.869 ] 00:21:22.869 }, 00:21:22.869 { 00:21:22.869 "subsystem": "iobuf", 00:21:22.869 "config": [ 00:21:22.869 { 00:21:22.869 "method": "iobuf_set_options", 00:21:22.869 "params": { 00:21:22.869 "small_pool_count": 8192, 00:21:22.869 "large_pool_count": 1024, 00:21:22.869 "small_bufsize": 8192, 00:21:22.869 "large_bufsize": 135168 00:21:22.869 } 00:21:22.870 } 00:21:22.870 ] 00:21:22.870 }, 00:21:22.870 { 00:21:22.870 "subsystem": "sock", 00:21:22.870 "config": [ 00:21:22.870 { 00:21:22.870 "method": "sock_set_default_impl", 00:21:22.870 "params": { 00:21:22.870 "impl_name": "uring" 00:21:22.870 } 00:21:22.870 }, 00:21:22.870 { 00:21:22.870 "method": "sock_impl_set_options", 00:21:22.870 "params": { 00:21:22.870 "impl_name": "ssl", 00:21:22.870 "recv_buf_size": 4096, 00:21:22.870 "send_buf_size": 4096, 00:21:22.870 "enable_recv_pipe": true, 00:21:22.870 "enable_quickack": false, 00:21:22.870 "enable_placement_id": 0, 00:21:22.870 "enable_zerocopy_send_server": true, 00:21:22.870 "enable_zerocopy_send_client": false, 00:21:22.870 "zerocopy_threshold": 0, 00:21:22.870 "tls_version": 0, 00:21:22.870 "enable_ktls": false 00:21:22.870 } 00:21:22.870 }, 00:21:22.870 { 00:21:22.870 "method": "sock_impl_set_options", 00:21:22.870 "params": { 00:21:22.870 "impl_name": "posix", 00:21:22.870 "recv_buf_size": 2097152, 00:21:22.870 "send_buf_size": 2097152, 00:21:22.870 "enable_recv_pipe": true, 00:21:22.870 
"enable_quickack": false, 00:21:22.870 "enable_placement_id": 0, 00:21:22.870 "enable_zerocopy_send_server": true, 00:21:22.870 "enable_zerocopy_send_client": false, 00:21:22.870 "zerocopy_threshold": 0, 00:21:22.870 "tls_version": 0, 00:21:22.870 "enable_ktls": false 00:21:22.870 } 00:21:22.870 }, 00:21:22.870 { 00:21:22.870 "method": "sock_impl_set_options", 00:21:22.870 "params": { 00:21:22.870 "impl_name": "uring", 00:21:22.870 "recv_buf_size": 2097152, 00:21:22.870 "send_buf_size": 2097152, 00:21:22.870 "enable_recv_pipe": true, 00:21:22.870 "enable_quickack": false, 00:21:22.870 "enable_placement_id": 0, 00:21:22.870 "enable_zerocopy_send_server": false, 00:21:22.870 "enable_zerocopy_send_client": false, 00:21:22.870 "zerocopy_threshold": 0, 00:21:22.870 "tls_version": 0, 00:21:22.870 "enable_ktls": false 00:21:22.870 } 00:21:22.870 } 00:21:22.870 ] 00:21:22.870 }, 00:21:22.870 { 00:21:22.870 "subsystem": "vmd", 00:21:22.870 "config": [] 00:21:22.870 }, 00:21:22.870 { 00:21:22.870 "subsystem": "accel", 00:21:22.870 "config": [ 00:21:22.870 { 00:21:22.870 "method": "accel_set_options", 00:21:22.870 "params": { 00:21:22.870 "small_cache_size": 128, 00:21:22.870 "large_cache_size": 16, 00:21:22.870 "task_count": 2048, 00:21:22.870 "sequence_count": 2048, 00:21:22.870 "buf_count": 2048 00:21:22.870 } 00:21:22.870 } 00:21:22.870 ] 00:21:22.870 }, 00:21:22.870 { 00:21:22.870 "subsystem": "bdev", 00:21:22.870 "config": [ 00:21:22.870 { 00:21:22.870 "method": "bdev_set_options", 00:21:22.870 "params": { 00:21:22.870 "bdev_io_pool_size": 65535, 00:21:22.870 "bdev_io_cache_size": 256, 00:21:22.870 "bdev_auto_examine": true, 00:21:22.870 "iobuf_small_cache_size": 128, 00:21:22.870 "iobuf_large_cache_size": 16 00:21:22.870 } 00:21:22.870 }, 00:21:22.870 { 00:21:22.870 "method": "bdev_raid_set_options", 00:21:22.870 "params": { 00:21:22.870 "process_window_size_kb": 1024 00:21:22.870 } 00:21:22.870 }, 00:21:22.870 { 00:21:22.870 "method": "bdev_iscsi_set_options", 00:21:22.870 "params": { 00:21:22.870 "timeout_sec": 30 00:21:22.870 } 00:21:22.870 }, 00:21:22.870 { 00:21:22.870 "method": "bdev_nvme_set_options", 00:21:22.870 "params": { 00:21:22.870 "action_on_timeout": "none", 00:21:22.870 "timeout_us": 0, 00:21:22.870 "timeout_admin_us": 0, 00:21:22.870 "keep_alive_timeout_ms": 10000, 00:21:22.870 "arbitration_burst": 0, 00:21:22.870 "low_priority_weight": 0, 00:21:22.870 "medium_priority_weight": 0, 00:21:22.870 "high_priority_weight": 0, 00:21:22.870 "nvme_adminq_poll_period_us": 10000, 00:21:22.870 "nvme_ioq_poll_period_us": 0, 00:21:22.870 "io_queue_requests": 512, 00:21:22.870 "delay_cm 07:25:31 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:22.870 d_submit": true, 00:21:22.870 "transport_retry_count": 4, 00:21:22.870 "bdev_retry_count": 3, 00:21:22.870 "transport_ack_timeout": 0, 00:21:22.870 "ctrlr_loss_timeout_sec": 0, 00:21:22.870 "reconnect_delay_sec": 0, 00:21:22.870 "fast_io_fail_timeout_sec": 0, 00:21:22.870 "disable_auto_failback": false, 00:21:22.870 "generate_uuids": false, 00:21:22.870 "transport_tos": 0, 00:21:22.870 "nvme_error_stat": false, 00:21:22.870 "rdma_srq_size": 0, 00:21:22.870 "io_path_stat": false, 00:21:22.870 "allow_accel_sequence": false, 00:21:22.870 "rdma_max_cq_size": 0, 00:21:22.870 "rdma_cm_event_timeout_ms": 0, 00:21:22.870 "dhchap_digests": [ 00:21:22.870 "sha256", 00:21:22.870 "sha384", 00:21:22.870 "sha512" 00:21:22.870 ], 00:21:22.870 "dhchap_dhgroups": [ 00:21:22.870 "null", 00:21:22.870 "ffdhe2048", 
00:21:22.870 "ffdhe3072", 00:21:22.870 "ffdhe4096", 00:21:22.870 "ffdhe6144", 00:21:22.870 "ffdhe8192" 00:21:22.870 ] 00:21:22.870 } 00:21:22.870 }, 00:21:22.870 { 00:21:22.870 "method": "bdev_nvme_attach_controller", 00:21:22.870 "params": { 00:21:22.870 "name": "nvme0", 00:21:22.870 "trtype": "TCP", 00:21:22.870 "adrfam": "IPv4", 00:21:22.870 "traddr": "127.0.0.1", 00:21:22.870 "trsvcid": "4420", 00:21:22.870 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:22.870 "prchk_reftag": false, 00:21:22.870 "prchk_guard": false, 00:21:22.870 "ctrlr_loss_timeout_sec": 0, 00:21:22.870 "reconnect_delay_sec": 0, 00:21:22.870 "fast_io_fail_timeout_sec": 0, 00:21:22.870 "psk": "key0", 00:21:22.870 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:22.870 "hdgst": false, 00:21:22.870 "ddgst": false 00:21:22.870 } 00:21:22.870 }, 00:21:22.870 { 00:21:22.870 "method": "bdev_nvme_set_hotplug", 00:21:22.870 "params": { 00:21:22.870 "period_us": 100000, 00:21:22.870 "enable": false 00:21:22.870 } 00:21:22.870 }, 00:21:22.870 { 00:21:22.870 "method": "bdev_wait_for_examine" 00:21:22.870 } 00:21:22.870 ] 00:21:22.870 }, 00:21:22.870 { 00:21:22.870 "subsystem": "nbd", 00:21:22.870 "config": [] 00:21:22.870 } 00:21:22.870 ] 00:21:22.870 }' 00:21:22.870 07:25:31 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.870 07:25:31 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:22.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:22.870 07:25:31 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.870 07:25:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:22.870 [2024-07-15 07:25:31.631297] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:21:22.870 [2024-07-15 07:25:31.631384] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85273 ] 00:21:22.870 [2024-07-15 07:25:31.769971] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.128 [2024-07-15 07:25:31.840991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.128 [2024-07-15 07:25:31.955946] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:23.128 [2024-07-15 07:25:31.997559] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:23.694 07:25:32 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.694 07:25:32 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:23.694 07:25:32 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:21:23.694 07:25:32 keyring_file -- keyring/file.sh@120 -- # jq length 00:21:23.694 07:25:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:24.261 07:25:32 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:21:24.261 07:25:32 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:21:24.261 07:25:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:24.261 07:25:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:24.261 07:25:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:24.261 07:25:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:24.261 07:25:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:24.579 07:25:33 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:24.579 07:25:33 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:21:24.579 07:25:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:24.579 07:25:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:24.579 07:25:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:24.579 07:25:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:24.579 07:25:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:24.579 07:25:33 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:21:24.579 07:25:33 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:21:24.579 07:25:33 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:21:24.579 07:25:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:24.837 07:25:33 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:21:24.838 07:25:33 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:24.838 07:25:33 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.8N6b7iUIgh /tmp/tmp.uHYT2OEqdf 00:21:24.838 07:25:33 keyring_file -- keyring/file.sh@20 -- # killprocess 85273 00:21:24.838 07:25:33 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85273 ']' 00:21:24.838 07:25:33 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85273 00:21:24.838 07:25:33 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:21:24.838 07:25:33 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:24.838 07:25:33 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85273 00:21:25.096 killing process with pid 85273 00:21:25.096 Received shutdown signal, test time was about 1.000000 seconds 00:21:25.096 00:21:25.096 Latency(us) 00:21:25.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.096 =================================================================================================================== 00:21:25.096 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:25.096 07:25:33 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:25.096 07:25:33 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:25.096 07:25:33 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85273' 00:21:25.096 07:25:33 keyring_file -- common/autotest_common.sh@967 -- # kill 85273 00:21:25.096 07:25:33 keyring_file -- common/autotest_common.sh@972 -- # wait 85273 00:21:25.096 07:25:33 keyring_file -- keyring/file.sh@21 -- # killprocess 85008 00:21:25.096 07:25:33 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85008 ']' 00:21:25.096 07:25:33 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85008 00:21:25.096 07:25:33 keyring_file -- common/autotest_common.sh@953 -- # uname 00:21:25.096 07:25:33 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:25.096 07:25:33 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85008 00:21:25.096 killing process with pid 85008 00:21:25.096 07:25:34 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:25.096 07:25:34 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:25.096 07:25:34 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85008' 00:21:25.096 07:25:34 keyring_file -- common/autotest_common.sh@967 -- # kill 85008 00:21:25.096 [2024-07-15 07:25:34.002071] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:25.096 07:25:34 keyring_file -- common/autotest_common.sh@972 -- # wait 85008 00:21:25.355 ************************************ 00:21:25.355 END TEST keyring_file 00:21:25.355 ************************************ 00:21:25.355 00:21:25.355 real 0m15.880s 00:21:25.355 user 0m41.185s 00:21:25.355 sys 0m2.816s 00:21:25.355 07:25:34 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:25.355 07:25:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:25.613 07:25:34 -- common/autotest_common.sh@1142 -- # return 0 00:21:25.613 07:25:34 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:21:25.613 07:25:34 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:25.613 07:25:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:25.613 07:25:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:25.613 07:25:34 -- common/autotest_common.sh@10 -- # set +x 00:21:25.613 ************************************ 00:21:25.613 START TEST keyring_linux 00:21:25.613 ************************************ 00:21:25.613 07:25:34 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:25.613 * Looking for test 
storage... 00:21:25.613 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:25.613 07:25:34 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:25.613 07:25:34 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:25.613 07:25:34 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:21:25.613 07:25:34 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:25.613 07:25:34 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.613 07:25:34 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:25.613 07:25:34 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.613 07:25:34 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:25.613 07:25:34 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.613 07:25:34 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.613 07:25:34 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.613 07:25:34 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.613 07:25:34 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.613 07:25:34 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:21:25.613 07:25:34 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=d3ffbb73-b196-4070-b8e4-0883df0bb9c9 00:21:25.613 07:25:34 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.613 07:25:34 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.613 07:25:34 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:25.613 07:25:34 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.613 07:25:34 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:25.613 07:25:34 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.613 07:25:34 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.613 07:25:34 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.613 07:25:34 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.613 07:25:34 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.613 07:25:34 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.613 07:25:34 keyring_linux -- paths/export.sh@5 -- # export PATH 00:21:25.613 07:25:34 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.613 07:25:34 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:21:25.613 07:25:34 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:25.613 07:25:34 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:25.613 07:25:34 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.613 07:25:34 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.613 07:25:34 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.614 07:25:34 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:25.614 07:25:34 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:25.614 07:25:34 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:25.614 07:25:34 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:25.614 07:25:34 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:25.614 07:25:34 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:25.614 07:25:34 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:21:25.614 07:25:34 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:21:25.614 07:25:34 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:21:25.614 07:25:34 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:21:25.614 07:25:34 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:25.614 07:25:34 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:21:25.614 07:25:34 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:25.614 07:25:34 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:25.614 07:25:34 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:21:25.614 07:25:34 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:25.614 07:25:34 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:25.614 07:25:34 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:21:25.614 07:25:34 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:25.614 07:25:34 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:25.614 07:25:34 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:21:25.614 07:25:34 keyring_linux -- nvmf/common.sh@705 -- # python - 00:21:25.614 07:25:34 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:21:25.614 07:25:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:21:25.614 /tmp/:spdk-test:key0 00:21:25.614 07:25:34 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:21:25.614 07:25:34 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:25.614 07:25:34 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:21:25.614 07:25:34 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:25.614 07:25:34 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:25.614 07:25:34 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:21:25.614 07:25:34 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:25.614 07:25:34 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:25.614 07:25:34 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:21:25.614 07:25:34 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:25.614 07:25:34 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:21:25.614 07:25:34 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:21:25.614 07:25:34 keyring_linux -- nvmf/common.sh@705 -- # python - 00:21:25.614 07:25:34 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:21:25.614 /tmp/:spdk-test:key1 00:21:25.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.614 07:25:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:21:25.614 07:25:34 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85387 00:21:25.614 07:25:34 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:25.614 07:25:34 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85387 00:21:25.614 07:25:34 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85387 ']' 00:21:25.614 07:25:34 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.614 07:25:34 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:25.614 07:25:34 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.614 07:25:34 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:25.614 07:25:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:25.872 [2024-07-15 07:25:34.595365] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:21:25.873 [2024-07-15 07:25:34.595678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85387 ] 00:21:25.873 [2024-07-15 07:25:34.730850] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.873 [2024-07-15 07:25:34.791240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.873 [2024-07-15 07:25:34.821843] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:26.131 07:25:34 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:26.131 07:25:34 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:21:26.131 07:25:34 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:21:26.131 07:25:34 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.131 07:25:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:26.131 [2024-07-15 07:25:34.959311] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.131 null0 00:21:26.131 [2024-07-15 07:25:34.991256] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:26.131 [2024-07-15 07:25:34.991624] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:26.131 07:25:35 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.131 07:25:35 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:21:26.131 237014830 00:21:26.131 07:25:35 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:21:26.131 1057449491 00:21:26.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:26.131 07:25:35 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85400 00:21:26.131 07:25:35 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85400 /var/tmp/bperf.sock 00:21:26.131 07:25:35 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85400 ']' 00:21:26.131 07:25:35 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:21:26.131 07:25:35 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:26.131 07:25:35 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:26.131 07:25:35 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:26.131 07:25:35 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:26.131 07:25:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:26.131 [2024-07-15 07:25:35.072696] Starting SPDK v24.09-pre git sha1 4835eb82b / DPDK 24.03.0 initialization... 
00:21:26.131 [2024-07-15 07:25:35.072794] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85400 ] 00:21:26.390 [2024-07-15 07:25:35.214993] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.390 [2024-07-15 07:25:35.273330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.390 07:25:35 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:26.390 07:25:35 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:21:26.390 07:25:35 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:21:26.390 07:25:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:21:26.957 07:25:35 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:21:26.957 07:25:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:27.216 [2024-07-15 07:25:35.938140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:27.216 07:25:35 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:27.216 07:25:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:27.474 [2024-07-15 07:25:36.215478] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:27.474 nvme0n1 00:21:27.474 07:25:36 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:21:27.474 07:25:36 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:21:27.474 07:25:36 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:27.474 07:25:36 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:27.474 07:25:36 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:27.474 07:25:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:27.733 07:25:36 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:21:27.733 07:25:36 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:27.733 07:25:36 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:21:27.733 07:25:36 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:21:27.733 07:25:36 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:27.733 07:25:36 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:21:27.733 07:25:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:27.991 07:25:36 keyring_linux -- keyring/linux.sh@25 -- # sn=237014830 00:21:27.991 07:25:36 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:21:27.991 07:25:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:27.991 
07:25:36 keyring_linux -- keyring/linux.sh@26 -- # [[ 237014830 == \2\3\7\0\1\4\8\3\0 ]] 00:21:27.991 07:25:36 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 237014830 00:21:27.991 07:25:36 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:21:27.991 07:25:36 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:28.250 Running I/O for 1 seconds... 00:21:29.183 00:21:29.183 Latency(us) 00:21:29.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.183 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:29.184 nvme0n1 : 1.01 11010.90 43.01 0.00 0.00 11570.54 7864.32 19899.11 00:21:29.184 =================================================================================================================== 00:21:29.184 Total : 11010.90 43.01 0.00 0.00 11570.54 7864.32 19899.11 00:21:29.184 0 00:21:29.184 07:25:38 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:29.184 07:25:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:29.442 07:25:38 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:21:29.442 07:25:38 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:21:29.442 07:25:38 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:29.442 07:25:38 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:29.442 07:25:38 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:29.443 07:25:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:29.701 07:25:38 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:21:29.701 07:25:38 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:29.701 07:25:38 keyring_linux -- keyring/linux.sh@23 -- # return 00:21:29.701 07:25:38 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:29.701 07:25:38 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:21:29.701 07:25:38 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:29.701 07:25:38 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:29.701 07:25:38 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:29.701 07:25:38 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:29.701 07:25:38 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:29.701 07:25:38 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:29.701 07:25:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:29.960 [2024-07-15 07:25:38.804782] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:29.960 [2024-07-15 07:25:38.805416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec3460 (107): Transport endpoint is not connected 00:21:29.960 [2024-07-15 07:25:38.806403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec3460 (9): Bad file descriptor 00:21:29.960 [2024-07-15 07:25:38.807400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:29.960 [2024-07-15 07:25:38.807422] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:29.960 [2024-07-15 07:25:38.807432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:29.960 request: 00:21:29.960 { 00:21:29.960 "name": "nvme0", 00:21:29.960 "trtype": "tcp", 00:21:29.960 "traddr": "127.0.0.1", 00:21:29.960 "adrfam": "ipv4", 00:21:29.960 "trsvcid": "4420", 00:21:29.960 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:29.960 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:29.960 "prchk_reftag": false, 00:21:29.960 "prchk_guard": false, 00:21:29.960 "hdgst": false, 00:21:29.960 "ddgst": false, 00:21:29.960 "psk": ":spdk-test:key1", 00:21:29.960 "method": "bdev_nvme_attach_controller", 00:21:29.960 "req_id": 1 00:21:29.960 } 00:21:29.960 Got JSON-RPC error response 00:21:29.960 response: 00:21:29.960 { 00:21:29.960 "code": -5, 00:21:29.960 "message": "Input/output error" 00:21:29.960 } 00:21:29.960 07:25:38 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:21:29.960 07:25:38 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:29.960 07:25:38 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:29.960 07:25:38 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:29.960 07:25:38 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:21:29.960 07:25:38 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:29.960 07:25:38 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:21:29.960 07:25:38 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:21:29.960 07:25:38 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:21:29.960 07:25:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:29.960 07:25:38 keyring_linux -- keyring/linux.sh@33 -- # sn=237014830 00:21:29.960 07:25:38 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 237014830 00:21:29.960 1 links removed 00:21:29.960 07:25:38 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:29.960 07:25:38 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:21:29.960 07:25:38 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:21:29.960 07:25:38 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:21:29.960 07:25:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:21:29.960 07:25:38 keyring_linux -- keyring/linux.sh@33 -- # sn=1057449491 00:21:29.960 07:25:38 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1057449491 00:21:29.960 1 links removed 00:21:29.960 07:25:38 
keyring_linux -- keyring/linux.sh@41 -- # killprocess 85400 00:21:29.960 07:25:38 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85400 ']' 00:21:29.960 07:25:38 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85400 00:21:29.960 07:25:38 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:21:29.960 07:25:38 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:29.960 07:25:38 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85400 00:21:29.960 killing process with pid 85400 00:21:29.960 Received shutdown signal, test time was about 1.000000 seconds 00:21:29.960 00:21:29.960 Latency(us) 00:21:29.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.960 =================================================================================================================== 00:21:29.960 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:29.960 07:25:38 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:29.960 07:25:38 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:29.960 07:25:38 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85400' 00:21:29.960 07:25:38 keyring_linux -- common/autotest_common.sh@967 -- # kill 85400 00:21:29.961 07:25:38 keyring_linux -- common/autotest_common.sh@972 -- # wait 85400 00:21:30.219 07:25:39 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85387 00:21:30.219 07:25:39 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85387 ']' 00:21:30.219 07:25:39 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85387 00:21:30.219 07:25:39 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:21:30.219 07:25:39 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:30.219 07:25:39 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85387 00:21:30.219 07:25:39 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:30.219 07:25:39 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:30.219 07:25:39 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85387' 00:21:30.219 killing process with pid 85387 00:21:30.219 07:25:39 keyring_linux -- common/autotest_common.sh@967 -- # kill 85387 00:21:30.219 07:25:39 keyring_linux -- common/autotest_common.sh@972 -- # wait 85387 00:21:30.478 00:21:30.478 real 0m4.978s 00:21:30.478 user 0m10.301s 00:21:30.478 sys 0m1.369s 00:21:30.478 07:25:39 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:30.478 ************************************ 00:21:30.478 END TEST keyring_linux 00:21:30.478 ************************************ 00:21:30.478 07:25:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:30.478 07:25:39 -- common/autotest_common.sh@1142 -- # return 0 00:21:30.478 07:25:39 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:21:30.478 07:25:39 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:21:30.478 07:25:39 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:21:30.478 07:25:39 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:21:30.478 07:25:39 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:21:30.478 07:25:39 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:21:30.478 07:25:39 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:21:30.478 07:25:39 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:21:30.478 07:25:39 -- spdk/autotest.sh@347 -- # '[' 0 
-eq 1 ']' 00:21:30.478 07:25:39 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:21:30.478 07:25:39 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:21:30.478 07:25:39 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:21:30.478 07:25:39 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:21:30.478 07:25:39 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:21:30.478 07:25:39 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:21:30.478 07:25:39 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:21:30.478 07:25:39 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:21:30.478 07:25:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:30.478 07:25:39 -- common/autotest_common.sh@10 -- # set +x 00:21:30.478 07:25:39 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:21:30.478 07:25:39 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:21:30.478 07:25:39 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:21:30.478 07:25:39 -- common/autotest_common.sh@10 -- # set +x 00:21:32.411 INFO: APP EXITING 00:21:32.411 INFO: killing all VMs 00:21:32.411 INFO: killing vhost app 00:21:32.411 INFO: EXIT DONE 00:21:32.669 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:32.669 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:21:32.669 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:21:33.605 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:33.605 Cleaning 00:21:33.605 Removing: /var/run/dpdk/spdk0/config 00:21:33.606 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:33.606 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:33.606 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:33.606 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:33.606 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:33.606 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:33.606 Removing: /var/run/dpdk/spdk1/config 00:21:33.606 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:33.606 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:33.606 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:33.606 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:33.606 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:33.606 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:33.606 Removing: /var/run/dpdk/spdk2/config 00:21:33.606 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:33.606 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:33.606 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:33.606 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:33.606 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:33.606 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:33.606 Removing: /var/run/dpdk/spdk3/config 00:21:33.606 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:33.606 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:33.606 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:33.606 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:33.606 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:33.606 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:33.606 Removing: /var/run/dpdk/spdk4/config 00:21:33.606 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:33.606 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:33.606 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:33.606 
Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:33.606 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:33.606 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:33.606 Removing: /dev/shm/nvmf_trace.0 00:21:33.606 Removing: /dev/shm/spdk_tgt_trace.pid58771 00:21:33.606 Removing: /var/run/dpdk/spdk0 00:21:33.606 Removing: /var/run/dpdk/spdk1 00:21:33.606 Removing: /var/run/dpdk/spdk2 00:21:33.606 Removing: /var/run/dpdk/spdk3 00:21:33.606 Removing: /var/run/dpdk/spdk4 00:21:33.606 Removing: /var/run/dpdk/spdk_pid58626 00:21:33.606 Removing: /var/run/dpdk/spdk_pid58771 00:21:33.606 Removing: /var/run/dpdk/spdk_pid58956 00:21:33.606 Removing: /var/run/dpdk/spdk_pid59037 00:21:33.606 Removing: /var/run/dpdk/spdk_pid59064 00:21:33.606 Removing: /var/run/dpdk/spdk_pid59174 00:21:33.606 Removing: /var/run/dpdk/spdk_pid59192 00:21:33.606 Removing: /var/run/dpdk/spdk_pid59310 00:21:33.606 Removing: /var/run/dpdk/spdk_pid59501 00:21:33.606 Removing: /var/run/dpdk/spdk_pid59641 00:21:33.606 Removing: /var/run/dpdk/spdk_pid59711 00:21:33.606 Removing: /var/run/dpdk/spdk_pid59782 00:21:33.606 Removing: /var/run/dpdk/spdk_pid59865 00:21:33.606 Removing: /var/run/dpdk/spdk_pid59929 00:21:33.606 Removing: /var/run/dpdk/spdk_pid59962 00:21:33.606 Removing: /var/run/dpdk/spdk_pid59998 00:21:33.606 Removing: /var/run/dpdk/spdk_pid60059 00:21:33.606 Removing: /var/run/dpdk/spdk_pid60153 00:21:33.606 Removing: /var/run/dpdk/spdk_pid60584 00:21:33.606 Removing: /var/run/dpdk/spdk_pid60625 00:21:33.606 Removing: /var/run/dpdk/spdk_pid60676 00:21:33.606 Removing: /var/run/dpdk/spdk_pid60692 00:21:33.606 Removing: /var/run/dpdk/spdk_pid60759 00:21:33.606 Removing: /var/run/dpdk/spdk_pid60762 00:21:33.606 Removing: /var/run/dpdk/spdk_pid60829 00:21:33.606 Removing: /var/run/dpdk/spdk_pid60832 00:21:33.606 Removing: /var/run/dpdk/spdk_pid60883 00:21:33.606 Removing: /var/run/dpdk/spdk_pid60888 00:21:33.606 Removing: /var/run/dpdk/spdk_pid60929 00:21:33.606 Removing: /var/run/dpdk/spdk_pid60944 00:21:33.606 Removing: /var/run/dpdk/spdk_pid61061 00:21:33.606 Removing: /var/run/dpdk/spdk_pid61092 00:21:33.606 Removing: /var/run/dpdk/spdk_pid61166 00:21:33.606 Removing: /var/run/dpdk/spdk_pid61217 00:21:33.606 Removing: /var/run/dpdk/spdk_pid61242 00:21:33.606 Removing: /var/run/dpdk/spdk_pid61300 00:21:33.606 Removing: /var/run/dpdk/spdk_pid61335 00:21:33.606 Removing: /var/run/dpdk/spdk_pid61368 00:21:33.606 Removing: /var/run/dpdk/spdk_pid61398 00:21:33.606 Removing: /var/run/dpdk/spdk_pid61433 00:21:33.606 Removing: /var/run/dpdk/spdk_pid61467 00:21:33.606 Removing: /var/run/dpdk/spdk_pid61502 00:21:33.606 Removing: /var/run/dpdk/spdk_pid61531 00:21:33.606 Removing: /var/run/dpdk/spdk_pid61565 00:21:33.606 Removing: /var/run/dpdk/spdk_pid61600 00:21:33.606 Removing: /var/run/dpdk/spdk_pid61629 00:21:33.606 Removing: /var/run/dpdk/spdk_pid61669 00:21:33.606 Removing: /var/run/dpdk/spdk_pid61698 00:21:33.606 Removing: /var/run/dpdk/spdk_pid61727 00:21:33.606 Removing: /var/run/dpdk/spdk_pid61767 00:21:33.865 Removing: /var/run/dpdk/spdk_pid61796 00:21:33.865 Removing: /var/run/dpdk/spdk_pid61831 00:21:33.865 Removing: /var/run/dpdk/spdk_pid61868 00:21:33.865 Removing: /var/run/dpdk/spdk_pid61900 00:21:33.865 Removing: /var/run/dpdk/spdk_pid61940 00:21:33.865 Removing: /var/run/dpdk/spdk_pid61970 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62042 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62130 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62426 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62442 
00:21:33.865 Removing: /var/run/dpdk/spdk_pid62474 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62488 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62503 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62528 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62540 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62559 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62578 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62591 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62607 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62626 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62639 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62655 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62674 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62687 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62703 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62722 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62741 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62751 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62787 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62795 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62830 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62888 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62917 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62921 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62955 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62959 00:21:33.865 Removing: /var/run/dpdk/spdk_pid62973 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63010 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63029 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63052 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63067 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63071 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63085 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63090 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63100 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63109 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63113 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63147 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63168 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63183 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63206 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63221 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63223 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63264 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63275 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63302 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63309 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63317 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63324 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63332 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63339 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63347 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63354 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63423 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63469 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63575 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63603 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63649 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63669 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63680 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63700 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63737 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63747 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63817 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63833 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63877 00:21:33.865 Removing: /var/run/dpdk/spdk_pid63950 00:21:33.865 Removing: /var/run/dpdk/spdk_pid64002 00:21:33.865 Removing: /var/run/dpdk/spdk_pid64033 00:21:33.865 Removing: 
/var/run/dpdk/spdk_pid64119 00:21:33.865 Removing: /var/run/dpdk/spdk_pid64161 00:21:33.865 Removing: /var/run/dpdk/spdk_pid64194 00:21:33.865 Removing: /var/run/dpdk/spdk_pid64418 00:21:33.865 Removing: /var/run/dpdk/spdk_pid64510 00:21:33.865 Removing: /var/run/dpdk/spdk_pid64533 00:21:33.865 Removing: /var/run/dpdk/spdk_pid64853 00:21:33.865 Removing: /var/run/dpdk/spdk_pid64891 00:21:34.123 Removing: /var/run/dpdk/spdk_pid65170 00:21:34.123 Removing: /var/run/dpdk/spdk_pid65585 00:21:34.123 Removing: /var/run/dpdk/spdk_pid65854 00:21:34.123 Removing: /var/run/dpdk/spdk_pid66637 00:21:34.123 Removing: /var/run/dpdk/spdk_pid67447 00:21:34.123 Removing: /var/run/dpdk/spdk_pid67563 00:21:34.123 Removing: /var/run/dpdk/spdk_pid67631 00:21:34.123 Removing: /var/run/dpdk/spdk_pid68906 00:21:34.123 Removing: /var/run/dpdk/spdk_pid69093 00:21:34.123 Removing: /var/run/dpdk/spdk_pid72581 00:21:34.123 Removing: /var/run/dpdk/spdk_pid72890 00:21:34.123 Removing: /var/run/dpdk/spdk_pid72998 00:21:34.123 Removing: /var/run/dpdk/spdk_pid73120 00:21:34.123 Removing: /var/run/dpdk/spdk_pid73142 00:21:34.123 Removing: /var/run/dpdk/spdk_pid73176 00:21:34.123 Removing: /var/run/dpdk/spdk_pid73198 00:21:34.123 Removing: /var/run/dpdk/spdk_pid73283 00:21:34.123 Removing: /var/run/dpdk/spdk_pid73410 00:21:34.123 Removing: /var/run/dpdk/spdk_pid73558 00:21:34.123 Removing: /var/run/dpdk/spdk_pid73639 00:21:34.123 Removing: /var/run/dpdk/spdk_pid73832 00:21:34.123 Removing: /var/run/dpdk/spdk_pid73902 00:21:34.123 Removing: /var/run/dpdk/spdk_pid73996 00:21:34.123 Removing: /var/run/dpdk/spdk_pid74295 00:21:34.123 Removing: /var/run/dpdk/spdk_pid74680 00:21:34.123 Removing: /var/run/dpdk/spdk_pid74682 00:21:34.123 Removing: /var/run/dpdk/spdk_pid74953 00:21:34.123 Removing: /var/run/dpdk/spdk_pid74967 00:21:34.123 Removing: /var/run/dpdk/spdk_pid74987 00:21:34.123 Removing: /var/run/dpdk/spdk_pid75012 00:21:34.123 Removing: /var/run/dpdk/spdk_pid75022 00:21:34.123 Removing: /var/run/dpdk/spdk_pid75329 00:21:34.123 Removing: /var/run/dpdk/spdk_pid75374 00:21:34.123 Removing: /var/run/dpdk/spdk_pid75662 00:21:34.123 Removing: /var/run/dpdk/spdk_pid75864 00:21:34.123 Removing: /var/run/dpdk/spdk_pid76227 00:21:34.123 Removing: /var/run/dpdk/spdk_pid76738 00:21:34.123 Removing: /var/run/dpdk/spdk_pid77587 00:21:34.123 Removing: /var/run/dpdk/spdk_pid78180 00:21:34.123 Removing: /var/run/dpdk/spdk_pid78182 00:21:34.123 Removing: /var/run/dpdk/spdk_pid80080 00:21:34.123 Removing: /var/run/dpdk/spdk_pid80140 00:21:34.123 Removing: /var/run/dpdk/spdk_pid80193 00:21:34.123 Removing: /var/run/dpdk/spdk_pid80253 00:21:34.123 Removing: /var/run/dpdk/spdk_pid80362 00:21:34.123 Removing: /var/run/dpdk/spdk_pid80415 00:21:34.123 Removing: /var/run/dpdk/spdk_pid80470 00:21:34.123 Removing: /var/run/dpdk/spdk_pid80529 00:21:34.123 Removing: /var/run/dpdk/spdk_pid80825 00:21:34.123 Removing: /var/run/dpdk/spdk_pid81991 00:21:34.123 Removing: /var/run/dpdk/spdk_pid82135 00:21:34.123 Removing: /var/run/dpdk/spdk_pid82366 00:21:34.123 Removing: /var/run/dpdk/spdk_pid82899 00:21:34.123 Removing: /var/run/dpdk/spdk_pid83051 00:21:34.123 Removing: /var/run/dpdk/spdk_pid83204 00:21:34.123 Removing: /var/run/dpdk/spdk_pid83301 00:21:34.123 Removing: /var/run/dpdk/spdk_pid83456 00:21:34.123 Removing: /var/run/dpdk/spdk_pid83564 00:21:34.123 Removing: /var/run/dpdk/spdk_pid84203 00:21:34.123 Removing: /var/run/dpdk/spdk_pid84238 00:21:34.123 Removing: /var/run/dpdk/spdk_pid84278 00:21:34.123 Removing: /var/run/dpdk/spdk_pid84520 
00:21:34.123 Removing: /var/run/dpdk/spdk_pid84562 00:21:34.123 Removing: /var/run/dpdk/spdk_pid84592 00:21:34.123 Removing: /var/run/dpdk/spdk_pid85008 00:21:34.123 Removing: /var/run/dpdk/spdk_pid85017 00:21:34.123 Removing: /var/run/dpdk/spdk_pid85273 00:21:34.123 Removing: /var/run/dpdk/spdk_pid85387 00:21:34.123 Removing: /var/run/dpdk/spdk_pid85400 00:21:34.123 Clean 00:21:34.123 07:25:43 -- common/autotest_common.sh@1451 -- # return 0 00:21:34.123 07:25:43 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:21:34.123 07:25:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:34.123 07:25:43 -- common/autotest_common.sh@10 -- # set +x 00:21:34.381 07:25:43 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:21:34.381 07:25:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:34.381 07:25:43 -- common/autotest_common.sh@10 -- # set +x 00:21:34.381 07:25:43 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:34.381 07:25:43 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:34.381 07:25:43 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:34.381 07:25:43 -- spdk/autotest.sh@391 -- # hash lcov 00:21:34.382 07:25:43 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:21:34.382 07:25:43 -- spdk/autotest.sh@393 -- # hostname 00:21:34.382 07:25:43 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:34.640 geninfo: WARNING: invalid characters removed from testname! 
00:22:06.716 07:26:12 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:08.100 07:26:17 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:11.390 07:26:19 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:13.919 07:26:22 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:17.210 07:26:25 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:19.738 07:26:28 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:23.021 07:26:31 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:23.021 07:26:31 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:23.021 07:26:31 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:22:23.021 07:26:31 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.021 07:26:31 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.021 07:26:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.021 07:26:31 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.021 07:26:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.021 07:26:31 -- paths/export.sh@5 -- $ export PATH 00:22:23.021 07:26:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.021 07:26:31 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:22:23.021 07:26:31 -- common/autobuild_common.sh@444 -- $ date +%s 00:22:23.021 07:26:31 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721028391.XXXXXX 00:22:23.021 07:26:31 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721028391.p7cE9f 00:22:23.021 07:26:31 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:22:23.021 07:26:31 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:22:23.021 07:26:31 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:22:23.021 07:26:31 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:22:23.021 07:26:31 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:22:23.021 07:26:31 -- common/autobuild_common.sh@460 -- $ get_config_params 00:22:23.021 07:26:31 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:22:23.021 07:26:31 -- common/autotest_common.sh@10 -- $ set +x 00:22:23.021 07:26:31 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:22:23.021 07:26:31 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:22:23.021 07:26:31 -- pm/common@17 -- $ local monitor 00:22:23.021 07:26:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:23.021 07:26:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:23.021 07:26:31 -- pm/common@25 -- $ sleep 1 00:22:23.021 07:26:31 -- pm/common@21 -- $ date +%s 00:22:23.021 07:26:31 -- pm/common@21 -- $ date +%s 00:22:23.021 07:26:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721028391 00:22:23.021 07:26:31 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721028391 00:22:23.021 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721028391_collect-vmstat.pm.log 00:22:23.021 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721028391_collect-cpu-load.pm.log 00:22:23.589 07:26:32 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:22:23.589 07:26:32 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:22:23.589 07:26:32 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:22:23.589 07:26:32 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:22:23.589 07:26:32 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:22:23.589 07:26:32 -- spdk/autopackage.sh@19 -- $ timing_finish 00:22:23.589 07:26:32 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:23.589 07:26:32 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:22:23.589 07:26:32 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:23.589 07:26:32 -- spdk/autopackage.sh@20 -- $ exit 0 00:22:23.589 07:26:32 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:22:23.589 07:26:32 -- pm/common@29 -- $ signal_monitor_resources TERM 00:22:23.589 07:26:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:22:23.589 07:26:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:23.589 07:26:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:22:23.589 07:26:32 -- pm/common@44 -- $ pid=87144 00:22:23.589 07:26:32 -- pm/common@50 -- $ kill -TERM 87144 00:22:23.589 07:26:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:23.589 07:26:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:22:23.589 07:26:32 -- pm/common@44 -- $ pid=87145 00:22:23.589 07:26:32 -- pm/common@50 -- $ kill -TERM 87145 00:22:23.589 + [[ -n 5151 ]] 00:22:23.589 + sudo kill 5151 00:22:23.599 [Pipeline] } 00:22:23.620 [Pipeline] // timeout 00:22:23.626 [Pipeline] } 00:22:23.649 [Pipeline] // stage 00:22:23.656 [Pipeline] } 00:22:23.675 [Pipeline] // catchError 00:22:23.685 [Pipeline] stage 00:22:23.687 [Pipeline] { (Stop VM) 00:22:23.702 [Pipeline] sh 00:22:23.997 + vagrant halt 00:22:28.180 ==> default: Halting domain... 00:22:33.481 [Pipeline] sh 00:22:33.759 + vagrant destroy -f 00:22:37.935 ==> default: Removing domain... 
00:22:37.947 [Pipeline] sh 00:22:38.231 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:22:38.240 [Pipeline] } 00:22:38.253 [Pipeline] // stage 00:22:38.257 [Pipeline] } 00:22:38.270 [Pipeline] // dir 00:22:38.277 [Pipeline] } 00:22:38.295 [Pipeline] // wrap 00:22:38.302 [Pipeline] } 00:22:38.319 [Pipeline] // catchError 00:22:38.330 [Pipeline] stage 00:22:38.332 [Pipeline] { (Epilogue) 00:22:38.348 [Pipeline] sh 00:22:38.628 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:46.792 [Pipeline] catchError 00:22:46.794 [Pipeline] { 00:22:46.808 [Pipeline] sh 00:22:47.081 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:47.339 Artifacts sizes are good 00:22:47.349 [Pipeline] } 00:22:47.369 [Pipeline] // catchError 00:22:47.382 [Pipeline] archiveArtifacts 00:22:47.389 Archiving artifacts 00:22:47.570 [Pipeline] cleanWs 00:22:47.584 [WS-CLEANUP] Deleting project workspace... 00:22:47.584 [WS-CLEANUP] Deferred wipeout is used... 00:22:47.618 [WS-CLEANUP] done 00:22:47.620 [Pipeline] } 00:22:47.636 [Pipeline] // stage 00:22:47.641 [Pipeline] } 00:22:47.658 [Pipeline] // node 00:22:47.666 [Pipeline] End of Pipeline 00:22:47.702 Finished: SUCCESS